Open Industry Network Performance & Power Test for
Cloud Networks
Evaluating 10/40 GbE Switches
Fall 2011 Edition
December, 2011
© Lippis Enterprises, Inc. 2011
Open Industry Network Performance & Power Test for Private/Public Data Center Cloud Computing Ethernet Fabrics Report: Evaluating 10 GbE Switches
2 © Lippis Enterprises, Inc. 2011 Evaluation conducted at Ixia’s iSimCity Santa Clara Lab on Ixia test equipment www.lippisreport.com
Foreword by Scott Bradner
It is now almost twenty years since the Internet Engineering Task Force (IETF) initially chartered the Benchmarking Methodology Working Group (bmwg) with me as chair. The aim of the working group was to develop standardized terminology and testing methodology for various performance tests on network devices such as routers and switches.
At the time that the bmwg was formed, it was almost impossible to compare products from different vendors without doing testing yourself because each vendor did its
own testing and, too often, designed the tests to paint their products in the best light. The RFCs produced by the bmwg provided a set of standards that network equipment vendors and testing equipment vendors could use so that tests by different vendors or different test labs could be compared.
Since its creation, the bmwg has produced 23 IETF RFCs that define performance testing terminology or methodology for specific protocols or situations and is currently working on a half dozen more. The bmwg has also had three different chairs since I resigned in 1993 to join the IETF’s steering group.
The performance tests in this report are the latest in a long series of similar tests produced by a number of testing labs. The testing methodology follows the same standards I was using in my own test lab at Harvard in the early 1990s, so the results are comparable. The comparisons would not be all that useful, since I was dealing with far slower networks, but a latency measurement I did in 1993 used the same standard methodology as the latency tests in this report.
Considering the limits on what I was able to test way back then, the Ixia test setup used in these tests is very impressive indeed. It almost makes me want to get into the testing business again.
Scott is the University Technology Security Officer at Harvard University. He writes a weekly column for Network World, and serves as the Secretary to the Board of Trustees of the Internet Society (ISOC). In addition, he is a trustee of the American Registry of Internet Numbers (ARIN) and author of many RFC network performance standards used in this industry evaluation report.
License Rights
© 2011 Lippis Enterprises, Inc. All rights reserved. This report is being sold solely to the purchaser set forth below. The report, including the written text, graphics, data, images, illustrations, marks, logos, sound or video clips, photographs and/or other works (singly or collectively, the “Content”), may be used only by such purchaser for informational purposes and such purchaser may not (and may not authorize a third party to) copy, transmit, reproduce, cite, publicly display, host, post, perform, distribute, alter, transmit or create derivative works of any Content or any portion of or excerpts from the Content in any fashion (either externally or internally to other individuals within a corporate structure) unless specifically authorized in writing by Lippis Enterprises. The purchaser agrees to maintain all copyright, trademark and other notices on the Content. The Report and all of the Content are protected by U.S. and/or international copyright laws and conventions, and belong to Lippis Enterprises, its licensors or third parties. No right, title or interest in any Content is transferred to the purchaser.
Purchaser Name:
Acknowledgements
We thank the following people for their help and support in making possible this first industry evaluation of 10GbE private/public data center cloud Ethernet switch fabrics.
Atul Bhatnagar, CEO of Ixia, for that first discussion of this project back at Interop in May 2009 and his continuing support throughout its execution.
Scott O. Bradner of Harvard University for his work at the IETF on network performance benchmarks, used in this project, and for the Foreword of this report.
Leviton for the use of fiber optic cables equipped with optical SFP+ connectors to link Ixia test equipment to 10GbE switches under test.
Siemon for the use of copper and fiber optic cables equipped with QSFP+ connectors to link Ixia test equipment to 40GbE switches under test.
Michael Githens, Lab Program Manager at Ixia, for his technical competence, extra effort and dedication to fairness as he executed test week and worked with participating vendors to answer their many questions.
Henry He, Technical Product Manager, Kelly Maloit, Director of Public Relations, Tara Van Unen, Director of Market Development, Jim Smith, VP of Marketing, all from Ixia, for their support and contributions to creating a successful industry event.
Bob Anderson, Photography and Videography at Ixia, for video podcast editing.
All participating vendors and their technical and marketing teams for their support not only of the event but of each other, as many lent helping hands to competitors on many levels.
Tiffany O’Loughlin for her amazing eye in capturing test week with still and video photography.
Bill Nicholson for his graphic artist skills that make this report look amazing.
Jeannette Tibbetts for her editing that makes this report read as smooth as a test report can.
Steven Cagnetta for his legal review of all agreements.
Table of Contents
Executive Summary .............................................................................................................................................5
Market Background .............................................................................................................................................8
Participating Suppliers
Alcatel-Lucent OmniSwitch 10K ......................................................................................................13
Alcatel-Lucent OmniSwitch 6900-40X ..............................................................................................17
Arista 7504 Series Data Center Switch ..............................................................................................23
Arista 7124SX 10G SFP Data Center Switch ....................................................................................27
Arista 7050S-64 10/40G Data Center Switch ...................................................................................32
IBM BNT RackSwitch G8124 ............................................................................................................37
IBM BNT RackSwitch G8264 ............................................................................................................41
Brocade VDX™ 6720-24 Data Center Switch ...................................................................................45
Brocade VDX™ 6730-32 Data Center Switch ...................................................................................49
Extreme Networks BlackDiamond® X8 .............................................................................................54
Extreme Networks Summit® X670V ..................................................................................................62
Dell/Force10 S-Series S4810 ...............................................................................................................67
Hitachi Cable, Apresia15000-64XL-PSR ..........................................................................................71
Juniper Networks EX Series EX8200 Ethernet Switch ....................................................................75
Mellanox/Voltaire® Vantage™ 6048 ....................................................................................................80
Cross Vendor Analysis
Top-of-Rack Switches ..........................................................................................................................84
Latency ...........................................................................................................................................85
Throughput ....................................................................................................................................92
Power ............................................................................................................................................104
Top-of-Rack Switches Industry Recommendations ......................................................................105
Core Switches......................................................................................................................................106
Latency .........................................................................................................................................107
Throughput ..................................................................................................................................109
Power ............................................................................................................................................115
Core Switches Industry Recommendations ...................................................................................117
Test Methodology .............................................................................................................................................119
Terms of Use ......................................................................................................................................................124
About Nick Lippis ............................................................................................................................................125
Executive Summary
To assist IT business leaders with the design and procurement of their private or public data center cloud fabric, the Lippis Report and Ixia have conducted an open industry evaluation of 10GbE and 40GbE data center switches. This report provides IT architects with the first comparative 10GbE and 40GbE switch performance information to assist them in purchase decisions and product differentiation.
The resources available for this test at Ixia’s iSimCity are out of reach for nearly all corporate IT departments: test equipment on the order of $9.5M, devices under test on the order of $2M, plus the costs of housing, powering and cooling the lab, and nearly 22 engineers from around the industry working to deliver test results. It’s our hope that this report will remove performance, power consumption, virtualization scale and latency concerns from the purchase decision, allowing IT architects and IT business leaders to focus on other vendor selection criteria, such as post-sales support, platform investment, vision, company financials, etc.
The Lippis test reports, based on independent validation at Ixia’s iSimCity, communicate credibility, competence, openness and trust to potential buyers of 10GbE and 40GbE data center switching equipment, as the tests are open to all suppliers and are fair, thanks to repeatable RFC-based and custom tests. The private/public data center cloud 10GbE and 40GbE fabric test was free for vendors to participate in and open to all industry suppliers of 10GbE and 40GbE switching equipment, in both modular and fixed configurations.
This report communicates the results of three sets of tests, which took place during the weeks of December 6 to 10, 2010, April 11 to 15, 2011, and October 3 to 14, 2011 at Ixia’s modern test lab, iSimCity, in Santa Clara, CA. Ixia supplied all test equipment needed to conduct the tests, while Leviton provided optical SFP+ connectors and optical cabling. Siemon provided copper and optical cables equipped with QSFP+ connectors for 40GbE connections. Each 10GbE and 40GbE supplier was allocated lab time to run the test with the assistance of an Ixia engineer. Each switch vendor configured its equipment while Ixia engineers ran the test and logged the resulting data.
The tests conducted were IETF RFC-based performance and latency tests, power consumption measurements, and a custom cloud-computing simulation of large north-south plus east-west traffic flows. Virtualization scale was also calculated and reported. Both Top-of-Rack (ToR) and Core switches were evaluated.
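The RFC-based throughput measurement referenced above (RFC 2544 and related methodologies) determines the highest rate at which a switch forwards frames with zero loss, typically via a binary search over offered load. A minimal sketch of that search logic follows; the send_at_rate() stub is hypothetical and stands in for the Ixia hardware that actually generates and counts packets.

```python
# Sketch of the binary search behind RFC 2544-style throughput tests:
# offer load at a rate, check for loss, and narrow the search window
# until the highest zero-loss rate is found to a given resolution.

def send_at_rate(rate_pct):
    """Hypothetical stub: offer traffic at rate_pct of line rate and
    return the observed loss fraction. Here we pretend the device
    under test forwards losslessly up to 99.2% of line rate."""
    return 0.0 if rate_pct <= 99.2 else 0.01

def rfc2544_throughput(resolution=0.1):
    low, high = 0.0, 100.0
    while high - low > resolution:
        mid = (low + high) / 2
        if send_at_rate(mid) == 0.0:
            low = mid    # no loss: throughput is at least mid
        else:
            high = mid   # loss observed: throughput is below mid
    return low           # highest zero-loss rate found, % of line rate

print(f"Throughput: {rfc2544_throughput():.1f}% of line rate")
```

In the actual Lippis/Ixia runs the traffic generation, loss counting and latency timestamping are done in test hardware, not software; only the search strategy is analogous.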
ToR switches evaluated were:
Alcatel-Lucent OmniSwitch 6900-40X
Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
IBM BNT RackSwitch G8124 (BLADE Network Technologies, an IBM Company)
IBM BNT RackSwitch G8264 (BLADE Network Technologies, an IBM Company)
Brocade VDX™ 6720-24 Data Center Switch
Brocade VDX™ 6730-32 Data Center Switch
Extreme Networks Summit® X670V
Dell/Force10 S-Series S4810
Hitachi Cable, Apresia15000-64XL-PSR
Mellanox/Voltaire® Vantage™ 6048
Core switches evaluated were:
Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Extreme Networks BlackDiamond® X8 Data Center Switch
Juniper Networks EX Series EX8200 Ethernet Switch
From the above list, the following products were tested in October 2011 and added to the products evaluated in December 2010 and April 2011:
Alcatel-Lucent OmniSwitch 6900-40X
Brocade VDX™ 6730-32 Data Center Switch
Extreme Networks BlackDiamond® X8
Extreme Networks Summit® X670V
Video feature: Click to view “What We Have Learned” video podcast
What We Have Learned from 18 Months of Testing:
The following lists our top ten findings:
1. 10GbE ToR and Core switches are ready for mass deployment in private/public data center cloud computing facilities, as the technology is mature and the products are stable, delivering their stated goals of high performance, low latency and low power consumption.
2. There are differences between suppliers, and it’s recommended to review each supplier’s results along with other important information in making purchase decisions.
3. The Lippis/Ixia industry evaluations are the “litmus test” for products entering the market, as only those firms that are confident in their products and ready to go to market participate. For each of the three industry tests, all ToR and Core switches evaluated were recently introduced to market, and the Lippis/Ixia test was their first open public evaluation.
4. Most ToR switches are based upon a new generation of merchant silicon that provides a single-chip forwarding engine of n-10GbE and n-40GbE ports that delivers consistent performance and low power consumption.
5. All ToR and Core switches offer low power consumption, with energy cost over a three-year period estimated at between 1.3% and 4% of acquisition cost. We measured watts per 10GbE port via ATIS methods and TEER values for all switches evaluated. It’s clear that these products represent the state of the art in energy conservation. ToR switches consume between 1.5 and 5.5 watts per 10GbE port, while Core switches consume between 8.1 and 21.68 watts per 10GbE port.
6. Most switches direct airflow front-to-back or back-to-front rather than side-to-side, to align with data center hot/cold aisle cooling designs.
7. There are differences between suppliers in terms of the network services they offer, such as virtualization support, quality of service, etc., as well as how their Core/Spine switches connect to ToR/Leaf switches to create a data center fabric. It’s recommended that IT business leaders evaluate Core switches with ToR switches and vice versa to assure that the network services and fabric attributes sought after are realized throughout the data center.
8. Most Core and ToR switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE), Internet Small Computer System Interface (iSCSI), Network-attached Storage (NAS), etc., and while these features were not tested in the Lippis/Ixia evaluation, most of these switches demonstrated that the raw capacity for their support is built in.
9. Three ToR switches support 40GbE, while all Core switches possess the backplane capacity for some combination of 40GbE and 100GbE. The Core switches tested here will easily scale to support 40GbE and 100GbE, while ToR suppliers offer uplinks at these speeds. Of special note is the Extreme BlackDiamond® X8, which was the first core switch tested with 24 40GbE ports, marking a new generation of data center core products entering the market.
10. The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric exists to take full advantage of server I/O at 10GbE bandwidth and latency levels. 40GbE connections are expected to ramp up during 2012 for ToR-to-Core connections, with server connections in select high performance environments.
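The per-port power figures in finding 5 translate into three-year energy cost with simple arithmetic. The sketch below uses the report’s high-end ToR measurement of 5.5 watts per port; the electricity price, port count and acquisition cost are illustrative assumptions, not figures from the test.

```python
# Rough three-year energy cost for a 10GbE ToR switch, using the
# high end of the per-port power range measured in this report.
# Electricity price, port count and acquisition cost are assumed.

WATTS_PER_PORT = 5.5          # high-end ToR measurement from the report
PORTS = 48
PRICE_PER_KWH = 0.10          # USD per kWh, assumed
HOURS_3YR = 3 * 365 * 24      # 26,280 hours of continuous operation
ACQUISITION_COST = 25_000     # USD, assumed list price

kwh = WATTS_PER_PORT * PORTS / 1000 * HOURS_3YR
energy_cost = kwh * PRICE_PER_KWH
print(f"3-year energy cost: ${energy_cost:,.0f} "
      f"({100 * energy_cost / ACQUISITION_COST:.1f}% of acquisition)")
```

Under these assumptions the result lands inside the 1.3% to 4% band the report cites; a different electricity price or switch price shifts the percentage proportionally.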
Market Background
The following market background section provides perspective and context on the major changes occurring in the IT industry and their fundamental importance to the network switches evaluated in the Lippis/Ixia test.
The IT industry is in the midst of a fundamental change toward centralization of application delivery via concentrated and dense private and public data center cloud computing sites. Corporations are concentrating IT spending in data centers and scaling them up via server virtualization to reduce complexity and associated operational cost while enjoying capital cost savings. In fact, there is a new cloud-o-nomics driven by the cost advantage of multi-core computing coupled with virtualization, which has enabled a scale of computing density not seen before. Couple this with a new tier of computing, Apple’s iPad, Android tablets plus smartphones that rely upon applications served from private and public cloud computing facilities, and you have the makings of a systemic and fundamental change in data center design plus IT delivery.
The world has witnessed the power of this technology and how it has changed the human condition and, literally, the world. On May 6, 2010, the U.S. stock market experienced a flash crash in which, in 13 short seconds, 27,000 contracts, representing 49% of trading volume, were traded. Further, in 15 minutes, $1 trillion of market value disappeared. Clusters of computers connected at 10 and 40GbE, programmed to perform high-frequency trades based upon proprietary algorithms, did all this. The movie Avatar was produced at WETA Digital with 90% of the movie developed via giant computer clusters connected at 10GbE to process and render animation. As the cost of big science has soared, scientists have moved to high performance computing clusters connected at 10 and 40GbE to simulate rather than build scientific experiments. Biotech engineers are analyzing protein-folding simulations, while military scientists simulate nuclear explosions rather than detonate these massive bombs. Netflix has seen its stock price increase and then crash as it distributes movies and TV programs over the Internet and now threatens Comcast’s and others’ on-demand business, thanks to its clusters of computers connected at 10GbE. At the same time, Blockbuster Video filed for bankruptcy.
A shift in IT purchasing is occurring, too. During the market crash of 2008, demand for information increased exponentially while IT business decision makers were forced to scale back budgets. These diverging trends of information demand and IT budget contraction have created a data center execution gap, where IT leaders are now forced to upgrade their infrastructure to meet information and application demand. To close the gap, IT business decision makers are investing in 10 and 40GbE switching. Over the past several quarters, sales of 10GbE fixed and modular switches have grown in the 60% to 70% range and now represent 25% of all Ethernet shipments. Note that data center Ethernet ports represent just 10% of the market but nearly 40% of revenue. 40GbE shipments are expected to ramp during 2012.
At the center of this new age in private and public data center cloud computing is networking, in particular Ethernet networking, which links servers to storage and the Internet, providing high-speed transport for application traffic flows. As the data center network connects all devices, it is the single most important IT asset for assuring that end users receive an excellent experience and that service is continually available. But in the virtualization/cloud era of computing, networking too is fundamentally changing, as demand for low latency, high performance and lower power consumption switches increases to address radically different traffic patterns. In addition, added network services, such as storage enablement, are forcing Ethernet to become not just a connectivity service in the data center but a fabric that connects computing and storage, transporting data packets plus blocks of storage traffic. In short, IT leaders are looking to Ethernet to provide a single data center fabric that connects all IT assets, with the promise of less equipment to install, simpler management and greater speed of IT service delivery.
The following characterize today’s modern private and public data center cloud network fabric.
Virtualization: The problems with networking in virtualized infrastructure are well documented. Move a virtual machine (VM) from one physical server to another, and the network port profile, Virtual Local Area Network (VLAN), security settings, etc., have to be reconfigured,
Open Industry Network Performance & Power Test for Private/Public Data Center Cloud Computing Ethernet Fabrics Report: Evaluating 10 GbE Switches
9 © Lippis Enterprises, Inc. 2011 Evaluation conducted at Ixia’s iSimCity Santa Clara Lab on Ixia test equipment www.lippisreport.com
adding cost, delay and rigidity to what should be an adaptive infrastructure. Many networking companies are addressing this issue by making their switches virtualization aware, auto reconfiguring and/or recommending a large flat layer 2 domain. In addition, networking companies are offering network management views that cross the administrative domains of networking and virtualization in an effort to provide increased visibility and management of physical and virtual infrastructure. The fabric also needs to eliminate the boundary between physical and virtual infrastructure, both in terms of management and visibility plus automated network changes as VMs move.
Traffic Patterns Shift with Exponential Volume Increase: Data centers are experiencing a new type of traffic pattern that is fundamentally changing network design plus latency and performance attributes. Not only are traditional north-south or client-server flows growing, but east-west or server-server and storage-server flows now dominate most private and public data center cloud facilities. A dominant driver of east-west flows is the fact that the days of static web sites are over: dynamic sites have taken over, where one page request can spawn 50 to 100 connections across multiple servers.
Large contributors to east-west flows are mobile devices. Mobile application use is expanding exponentially, thanks to the popularity of the iPhone, iPad and, increasingly, Android smartphones plus tablets. As of this writing, there are some 700,000-plus smartphone applications. The traffic load these applications are placing on data center Ethernet fabrics is paradoxically immense. The vast majority of mobile applications are hosted in data centers and/or public cloud facilities. The application model of mobile devices is not to load them with thick applications like Microsoft Word, PowerPoint, Excel, etc., but to load them with thin clients that access applications and data in private and/or public data center cloud facilities.
A relatively small query to a Facebook page, New York Times article, Amazon purchase, etc., sends a flurry of east-west traffic to respond to the query. Today’s dynamic web spreads a web page over multiple servers and, when called upon, drives a tsunami of traffic, which must be delivered well before the person requesting the data via mobile device or desktop loses interest and terminates the request.
East-west traffic will only increase, as this new tier of tablet computing is expected to ship 171 million units in 2014, up from 57 million this year, according to iSuppli. Add tens of millions of smartphones to the mix, all relying on private/public data center cloud facilities for their applications, and you have the ingredients for a sustained trend placing increased pressure for east-west traffic flows upon Ethernet data center fabrics.
Low Latency: Data experiences approximately 10 ns of latency as it traverses a computer bus passing between memory and CPU. Could networking become so fast that it will emulate a computer bus, so that a data center operates like one giant computer? The answer is no, but the industry is getting closer. Today’s 10GbE switches produce 400 to 700 ns of latency. By 2014, it’s anticipated that 100GbE switching will reduce latency to nearly 100 ns.
Special applications, especially in high frequency trading and other financial markets, measure latency in terms of millions of dollars in revenue lost or gained, placing enormous pressure on networking gear to be as fast as engineers can design. As such, low latency is an extremely important consideration in the financial services industry. For example, aggregating all 300-plus stock market feeds would generate approximately 3.6Gbps of traffic, but IT architects in this market choose 10GbE server connections over 1GbE for the sole purpose of lower latency, even though the bandwidth is nearly three times greater than what theoretically would be required. They will move to 40GbE, too, as 40GbE is priced at three to four times that of 10GbE for four times the bandwidth.
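The 10GbE-over-1GbE choice above is easy to quantify at the frame level: serialization delay, the time just to clock a frame onto the wire, scales inversely with link speed. A small illustrative calculation for a full-size 1,500-byte Ethernet frame:

```python
# Serialization delay per frame at two link speeds. At 1 Gbps one
# bit takes 1 ns to transmit, so a 12,000-bit frame takes 12,000 ns;
# at 10 Gbps the same frame takes one tenth of that.

FRAME_BITS = 1500 * 8          # one full-size Ethernet payload frame

def serialization_ns(bits, gbps):
    # bits divided by (gigabits per second) yields nanoseconds,
    # since 1 Gbps transmits exactly 1 bit per nanosecond
    return bits / gbps

print(f"1GbE:  {serialization_ns(FRAME_BITS, 1):,.0f} ns per frame")
print(f"10GbE: {serialization_ns(FRAME_BITS, 10):,.0f} ns per frame")
```

At 1GbE, serialization alone (12,000 ns per frame) dwarfs the 400 to 700 ns a modern 10GbE switch adds, which is why the financial services market buys the faster link for latency rather than bandwidth.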
More generally, latency is fundamental to assuring an excellent user experience. With east-west traffic flows making up as much as 80% of data center traffic, low latency requirements are paramount, as every trip a packet makes between servers in the process of responding to an end user’s query adds delay to the response and reduces the user experience or revenue potential.
Storage Traffic over Ethernet Fabric: Converged I/O or unified networking, where storage and network traffic flow over a single Ethernet network, will increasingly be adopted in 2012. Multiple storage enablement options exist, such as iSCSI over Ethernet, ATA over Ethernet (AoE), FCoE, etc. For IP-based storage approaches, such as iSCSI over Ethernet and AoE, all that is needed is a 10GbE network interface card (NIC) in a server to support both storage and data traffic, but the Ethernet fabric requires lossless Ethernet to assure integrity of storage transport over the fabric as blocks of data transmit between server and storage farms.
For FCoE, a single converged network adaptor or CNA plugged into a server provides the conduit for storage and application traffic flows to traverse over an Ethernet fabric. The number of suppliers offering CNAs has grown significantly, including Intel, HP, Emulex, IBM, ServerEngines, QLogic, Cisco, Brocade, etc. In addition, the IEEE opened the door for mass deployment as it has ratified the key Ethernet standards for lossless Ethernet. What will drive converged I/O is the reduced cost of cabling, NICs and switching hardware.
Low Power Consumption: With energy costs increasing, and with green corporate initiatives and government mandates to reduce carbon emissions now pervasive, IT leaders have been demanding lower energy consumption from all IT devices, including data center network equipment. The industry has responded with significant improvements in energy efficiency and overall lower wattage consumption per port while 10 and 40GbE switches operate at full line rate.
Virtualized Desktops: 2012 promises to be a year of increased virtualized desktop adoption. Frustrated with enterprise desktop application licensing and the desktop support model, IT business leaders will turn toward virtualizing desktops in increasing numbers. The application model of virtualized desktops is to deliver a wide range of corporate applications, hosted in private/public data center clouds, over the enterprise network. While there are no estimates of the traffic load this will place on campus and data center Ethernet networking, one can only assume it will add north-south traffic volume to already increasing and dominating east-west flows.
Fewer Network Tiers: To deliver the low latency and high throughput performance needed to support the increasing and changing traffic profiles mentioned above, plus ease VM moves without network configuration changes, a new two-tier private/public data center cloud network design has emerged. A three-tier network architecture is the dominant structure in data centers today and will likely continue as the optimal design for many smaller data center networks. For most network architects and administrators, this type of design provides the best balance of asset utilization, layer 3 routing for segmentation, scaling and services, plus efficient physical design for cabling and fiber runs.
By three tiers, we mean access switches (ToR switches, or modular/End-of-Row switches) that connect to servers and IP-based storage. These access switches are connected via Ethernet to aggregation switches. The aggregation switches are connected to a set of core switches or routers that forward traffic flows from servers to an intranet and the internet, and between the aggregation switches. It is common in this structure to oversubscribe bandwidth in the access tier and, to a lesser degree, in the aggregation tier, which can increase latency and reduce performance. Spanning Tree Protocol (STP) between access and aggregation, plus aggregation and core, further drives oversubscription. Inherent in this structure is the placement of layer 2 versus layer 3 forwarding, that is, VLANs and IP routing. It is common that VLANs are constructed within access and aggregation switches, while layer 3 capabilities in the aggregation or core switches route between them.
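The oversubscription described above can be quantified as the ratio of server-facing bandwidth to uplink bandwidth at a tier. A minimal sketch, with illustrative port counts that are assumptions for the example, not figures from this report:

```python
# Oversubscription ratio at a network tier: downlink capacity / uplink capacity.
# Port counts below are illustrative assumptions, not figures from this report.

def oversubscription(downlink_ports: int, downlink_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Return the ratio of ingress (server-facing) to egress (uplink) bandwidth."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# A hypothetical access switch: 48 x 10GbE to servers, 4 x 40GbE uplinks.
ratio = oversubscription(48, 10, 4, 40)
print(f"Access-tier oversubscription: {ratio:.1f}:1")  # 3.0:1
```

A ratio above 1:1 means the tier can be congested when all downlinks are busy, which is why the text notes latency and performance can suffer in the access and aggregation tiers.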
Within the private and public data center cloud network market, where the number of servers runs into the thousands to tens of thousands and beyond, east-west bandwidth is significant and applications require a single layer 2 domain; the existing Ethernet or layer 2 capabilities within this tiered architecture do not meet these emerging demands.
Open Industry Network Performance & Power Test for Private/Public Data Center Cloud Computing Ethernet Fabrics Report: Evaluating 10 GbE Switches
11 © Lippis Enterprises, Inc. 2011 Evaluation conducted at Ixia’s iSimCity Santa Clara Lab on Ixia test equipment www.lippisreport.com
An increasingly common design to scale a data center fabric is often called a "fat-tree" or "leaf-spine" and consists of two kinds of switches: one that connects servers and a second that connects switches, creating a non-blocking, low-latency fabric. We use the terms "leaf" or "Top of Rack (ToR)" switch to denote server-connecting switches, and "spine" or "core" to denote switches that connect leaf/ToR switches. Together, a leaf/ToR and spine/core architecture creates a scalable data center fabric. Key to this design is the elimination of STP, with some number of multi-links between leaf and spine that eliminate oversubscription and enable a non-blocking fabric, assuming the switches are designed with enough backplane capacity to support packet forwarding equal to the sum of leaf ingress bandwidth. There are multiple approaches for connecting leaf and spine switches at high bandwidth, which fall under the category of Multi-Chassis Link Aggregation Group, or MC-LAG, building on the link aggregation defined in IEEE 802.3ad.
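The non-blocking condition described above reduces to simple arithmetic: a leaf is non-blocking when its uplink bandwidth to the spine matches its server-facing ingress bandwidth, and the spine tier must be able to forward the sum of all leaf uplinks. A sketch with hypothetical port counts (all numbers here are assumptions for illustration):

```python
# Non-blocking check for a leaf-spine fabric: each leaf's uplink bandwidth
# must equal or exceed its server-facing ingress bandwidth, and the spine
# must forward the sum of all leaf uplinks. Port counts are hypothetical.

def leaf_is_nonblocking(server_ports: int, server_gbps: float,
                        uplinks_per_leaf: int, uplink_gbps: float) -> bool:
    return uplinks_per_leaf * uplink_gbps >= server_ports * server_gbps

def spine_capacity_needed(num_leaves: int, uplinks_per_leaf: int,
                          uplink_gbps: float) -> float:
    """Aggregate capacity (Gbps) the spine tier must forward."""
    return num_leaves * uplinks_per_leaf * uplink_gbps

# 32 x 10GbE down and 8 x 40GbE up per leaf -> 320 Gbps each way: non-blocking.
print(leaf_is_nonblocking(32, 10, 8, 40))        # True
print(spine_capacity_needed(16, 8, 40), "Gbps")  # 5120.0 Gbps
```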
Paramount in the two-tier leaf-spine architecture is high spine switch performance, which collapses the aggregation layer of the traditional three-tier network. Another design is to connect every switch together in a full mesh via MC-LAG connections, with every server being one hop away from every other. In addition, protocols such as TRILL (Transparent Interconnection of Lots of Links) and SPB (Shortest Path Bridging) have emerged to connect core switches together at high speed in an active-active mode.
The above captures the major trends and demands IT business leaders are placing on the networking industry. The underpinnings of private and public data center cloud network fabrics are 10GbE switching with 40GbE and 100GbE ports/modules. 40GbE and 100GbE are in limited availability now but will be increasingly offered and adopted during 2012. Network performance, including throughput and latency, comprises fundamental switch attributes to understand and review across suppliers. If the 10/40GbE switches an IT leader selects cannot scale performance to support increasing traffic volume plus shifts in traffic profile, not only will the network fail to be a fabric able to support converged storage traffic, but business processes, application performance and user experience will suffer too.
During 2012, an increasing number of servers will be equipped with 10GbE LAN on Motherboard (LOM), driving 10GbE network requirements. In addition, with nearly 80% of IT spend consumed in data center infrastructure and all IT assets eventually running over 10/40GbE switching, the stakes could not be higher to select the right product upon which to build this fundamental corporate asset. Further, data center network equipment has the longest life span of all IT equipment; networking is therefore a long-term investment and vendor commitment.
Participating Suppliers
This was the first time that all participating vendors' switches were tested in an open public forum. In addition, the ToR switches all utilized a new single-chip design sourced from Broadcom, Marvell or Fulcrum. This chip design supports up to 64 10GbE ports on a single chip and represents a new generation of ToR switches. These ToR or leaf switches, with prices as low as $350 per 10GbE port, are the fastest growing segment of the 10GbE fixed configuration switch market. Their list prices vary from $12K to $25K. ToR switches tested were:
Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
BLADE Network Technologies, an IBM Company: IBM BNT RackSwitch G8124
BLADE Network Technologies, an IBM Company: IBM BNT RackSwitch G8264
Brocade VDX™ 6720-24 Data Center Switch
Dell/Force10 S-Series S4810
Hitachi Cable, Apresia15000-64XL-PSR
Mellanox/Voltaire® Vantage™ 6048
All core switches, including Juniper's EX8216, Arista Networks' 7504 and Alcatel-Lucent's OmniSwitch 10K, were also tested for the first time in an open and public forum. These "spine" switches make up the core of cloud networking. A group of spine switches connected in a partial or full mesh creates an Ethernet fabric connecting ToR or leaf switches and the associated data plus storage traffic. These core switches range in list price from $250K to $800K, with a price per 10GbE port of $1.2K to $6K depending upon port density and software license arrangement. The Core/Spine switches tested were:
Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Juniper Networks EX Series EX8200 Ethernet Switch
We present each supplier’s results in alphabetical order.
Alcatel-Lucent launched its new entry into the enterprise data center market on December 17, 2010, with the OmniSwitch™ 10K. The OmniSwitch was the most densely populated device tested, with 256 ports of 10GbE. The test numbers below represent the first public performance and power consumption measurements for the OmniSwitch™ 10K. The Alcatel-Lucent OmniSwitch™ 10K Modular Ethernet LAN Chassis is the first of a new generation of network-adaptable LAN switches. It exemplifies Alcatel-Lucent's approach to enabling what it calls Application Fluent Networks, which are designed to deliver a high-quality user experience while optimizing the performance of legacy, real-time and multimedia applications.
Alcatel-Lucent OmniSwitch™ 10K
Alcatel-Lucent OmniSwitch 10K Test Configuration

Device under test: OmniSwitch 10K (www.alcatel-lucent.com), software version 7.1.1.R01.1638, port density 256
Test equipment: Ixia XM12 High Performance Chassis (IxOS 6.00 EA); Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module (IxNetwork 5.70 EA); LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (IxAutomate 6.90 GA SP1); http://www.ixiacom.com/
Cabling: Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com)
Alcatel-Lucent OmniSwitch 10K* RFC 2544 Layer 2 Latency Test (ns)

Frame Size (Bytes)      64     128     256     512    1024    1280    1518    2176    9216
Max Latency          23600   23420   23580   23840   24040   24480   25640   42960  158600
Avg Latency          20864   20561   20631   20936   21216   21787   22668   25255   36823
Min Latency           8480    8940    9280    9500   10380   10300   10960   11440   20420
Avg. Delay Variation     9       5       5       8       7       5      10       5      10

Alcatel-Lucent OmniSwitch 10K* RFC 2544 Layer 3 Latency Test (ns)

Frame Size (Bytes)      64     128     256     512    1024    1280    1518    2176    9216
Max Latency          23700   23600   23500   24000   24280   24480   25700   38580  334680
Avg Latency          20128   20567   20638   20931   21211   21793   22658   25242   45933
Min Latency           9160    9360    8760    9260    9740   10340   10260   10980   20180
Avg. Delay Variation     9       5       5       8       7       4      10       5      10
The OmniSwitch 10K was tested across all 256 ports of 10GbE. For layer 2 traffic, its average latency ranged from a low of 20,561 ns (about 20.6 µs) to a high of 36,823 ns (about 36.8 µs) at jumbo 9216-Byte frames. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.
* These RFC 2544 tests were conducted with default configuration. Lower latency configuration mode will be available in the future.
The OmniSwitch 10K demonstrated 100% throughput as a percentage of line rate across all 256 10GbE ports. In other words, not a single packet was dropped while the OmniSwitch 10K was presented with enough traffic to populate its 256 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.
The OmniSwitch 10K demonstrated nearly 80% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The OmniSwitch did not exhibit head-of-line (HOL) blocking, which means that as the 10GbE port on the OmniSwitch became congested, it did not impact the performance of other ports. There was no back pressure detected, and the Ixia test gear did not receive flow control frames.
For layer 3 traffic, the OmniSwitch 10K's measured average latency ranged from a low of 20,128 ns at 64 Bytes to a high of 45,933 ns (about 45.9 µs) at jumbo 9216-Byte frames. Its average delay variation for layer 3 traffic ranged between 4 and 10 ns, providing consistent latency across all packet sizes at full line rate.
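The figures quoted above are the standard RFC 2544 summary statistics per frame size. A sketch of how raw per-frame latency samples reduce to the min/avg/max and average delay variation reported in the tables (the sample values below are invented for illustration):

```python
# Reduce raw per-frame latency samples (ns) to the summary statistics used
# in RFC 2544 style reporting: min, average, max, and average delay
# variation (mean absolute difference between consecutive samples).
# The sample values are invented, not measured data from this report.

def latency_summary(samples_ns):
    avg = sum(samples_ns) / len(samples_ns)
    jitter = sum(abs(b - a) for a, b in zip(samples_ns, samples_ns[1:]))
    jitter /= (len(samples_ns) - 1)
    return {"min": min(samples_ns), "avg": avg,
            "max": max(samples_ns), "avg_delay_variation": jitter}

stats = latency_summary([20120, 20135, 20128, 20119, 20131])
print(stats)  # min 20119, avg 20126.6, max 20135, avg_delay_variation 10.75
```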
Alcatel-Lucent OmniSwitch 10K RFC 2544 L2 & L3 Throughput Test (% Line Rate)

Frame Size (Bytes)   64   128   192   256   512   1024   1280   1518   2176   9216
Layer 2             100   100   100   100   100    100    100    100    100    100
Layer 3             100   100   100   100   100    100    100    100    100    100

Alcatel-Lucent OmniSwitch 10K RFC 2889 Congestion Test (% Line Rate)

Frame Size (Bytes)               64      128      256      512     1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate   77.784   77.79   77.786   77.772   77.775   77.813   77.828   77.857   78.39
Layer 3 Agg. Forwarding Rate   77.784   77.79   77.786   77.772   77.775   77.813   77.828   77.856   78.39
Head of Line Blocking            no       no       no       no       no       no       no       no       no
Back Pressure                    no       no       no       no       no       no       no       no       no
Agg Flow Control Frames           0        0        0        0        0        0        0        0        0
The OmniSwitch 10K demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 9,596 ns at 64-Byte packets to 28,059 ns at 9216-Byte packets.
Alcatel-Lucent OmniSwitch 10K RFC 3918 IP Multicast Test

Frame Size (Bytes)          64    128    256    512   1024   1280   1518   2176   9216
Agg Tput Rate (%)          100    100    100    100    100    100    100    100    100
Agg. Average Latency (µs)  9.6    9.6    9.8   10.3   11.1   11.6   11.9   13.0   28.1
The OmniSwitch 10K performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 28 µs.
The OmniSwitch 10K represents a new breed of cloud network spine switches with power efficiency as a core value. Its WattsATIS/port is 13.34 and its TEER value is 71. Its power cost per 10GbE is estimated at $16.26 per year. The three-year cost to power the OmniSwitch is estimated at $12,485.46 and represents less than 3% of its list price. In keeping with data center best practices, its cooling fans flow air front to back.
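The power figures compose straightforwardly: Watts per 10GbE port, times port count, times hours of operation, times an electricity tariff. A sketch that approximately reproduces the OmniSwitch 10K numbers; the $/kWh tariff is an inferred assumption, not a rate stated in this excerpt:

```python
# Back out the 3-year power cost from Watts(ATIS) per 10GbE port.
# The electricity tariff is an inferred assumption, not stated in this report.

HOURS_3YR = 3 * 365 * 24          # 26,280 hours of continuous operation
TARIFF_USD_PER_KWH = 0.139        # assumed electricity rate

def three_year_power_cost(watts_per_port: float, ports: int) -> float:
    kwh = watts_per_port * ports * HOURS_3YR / 1000.0
    return kwh * TARIFF_USD_PER_KWH

cost = three_year_power_cost(13.34, 256)
print(f"${cost:,.2f} over 3 years")  # close to the reported $12,485.46
```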
Alcatel-Lucent OmniSwitch 10K Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 28125
East-West Server_to_Database 14063
East-West HTTP 19564
East-West iSCSI-Server_to_Storage 17140
East-West iSCSI-Storage_to_Server 26780
North-South Client_to_Server 13194
North-South Server_to_Client 11225
Alcatel-Lucent OmniSwitch 10K Power Consumption Test
WattsATIS/10GbE port 13.34
3-Year Cost/WattsATIS/10GbE $48.77
Total power cost/3-Year $12,485.46
3 yr energy cost as a % of list price 2.21%
TEER Value 71
Cooling Front to Back
Discussion:
The OmniSwitch™ 10K seeks to improve application performance and user experience with deep packet buffers, a lossless virtual output queuing (VOQ) fabric and extensive traffic management capabilities. This architecture proved its value during the RFC 2889 layer 2 and layer 3 congestion tests with a 78% aggregated forwarding rate when a single 10GbE port was oversubscribed at 150% of line rate. The OmniSwitch™ 10K did not exhibit HOL blocking or back pressure, nor did it signal the Ixia test equipment with aggregated flow control frames to slow down traffic flow. Notable features not tested are its security and high availability design for uninterrupted uptime.
The OmniSwitch™ 10K was found to have low power consumption, front-to-back cooling, front-accessible components and a compact form factor. The OmniSwitch™ 10K is designed to meet the requirements of mid- to large-sized enterprise data centers.
The Alcatel-Lucent (ALU) OmniSwitch™ 6900 Stackable LAN Switch series is the latest hardware platform in Alcatel-Lucent Enterprise's network infrastructure portfolio, following the introduction of the OmniSwitch™ 10K and the OmniSwitch™ 6850E. The OmniSwitch™ 6900 series is a fixed form factor LAN switch designed for data centers while at the same time offering great value as a small core switch in a converged enterprise network.

The OmniSwitch™ 6900 represents the next step in realizing Alcatel-Lucent Enterprise's Application Fluent Network (AFN) vision of network infrastructure for the data center and LAN core. Alcatel-Lucent's AFN vision is based upon a resilient architecture in which the network dynamically adapts to applications, users and devices to provide a high-quality user experience, as well as reducing operating costs and management complexity.
Alcatel-Lucent OmniSwitch™ 6900-X40
Alcatel-Lucent OmniSwitch™ 6900-X40 Test Configuration

Device under test: OmniSwitch™ 6900-X40 (www.alcatel-lucent.com), software version 7.2.1.R02, port density 64
Test equipment: Ixia XG12 High Performance Chassis (IxOS 6.10 EA SP2); Ixia line cards: Xcellon Flex AP10G16S (16 port 10G module), Xcellon Flex Combo 10/40GE AP (16 port 10G / 4 port 40G), Xcellon Flex 4x40GEQSFP+ (4 port 40G) (IxNetwork 6.10 EA); http://www.ixiacom.com/
Cabling: 10GbE optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com); Siemon QSFP+ Passive Copper Cable, 40GbE, 3 meter copper, QSFP-FA-010; Siemon Moray Low Power Active Optical Cable Assemblies, single mode, QSFP+ 40GbE optical cable, 7 meters, QSFP30-03 (www.siemon.com)
The OmniSwitch™ 6900 series consists of two models: the OmniSwitch™ 6900-X40 and the OmniSwitch™ 6900-X20. They provide 10GbE connectivity with industry-leading port density, switching capacity and low power consumption in a compact one rack unit (1RU) form factor. The OmniSwitch™ 6900-X40 can deliver up to 64 10GbE ports, while the OmniSwitch™ 6900-X20 can deliver up to 32 10GbE ports, both at wire-rate, non-blocking performance with unique high availability and resiliency features as demanded in data centers and the core of converged networks.
The OmniSwitch™ 6900 series, with two plug-in module bays, is a flexible 10GbE fixed configuration switch, which can accommodate up to six 40GbE ports on the OS6900-X40. The OmniSwitch™ 6900 series provides a long-term sustainable platform that supports 40GbE connectivity as well as specialized data center convergence features such as Data Center Bridging (DCB), Shortest Path Bridging (SPB) and Fibre Channel over Ethernet (FCoE) without any hardware change-out.
For the Fall 2011 Lippis/Ixia test, we populated and tested the ALU OmniSwitch™ 6900-X40's 10 and 40GbE functionality and performance. The ALU OmniSwitch™ 6900-X40 switch was tested across all 40 10GbE and six 40GbE ports. Its average latency ranged from a low of 792 ns to a high of 897 ns for layer 2 traffic, measured in store-and-forward mode. Its average delay variation ranged between 3 and 10 ns, providing consistent latency across all packet sizes at full line rate, a comforting result given the ALU OmniSwitch™ 6900-X40's support of converged network/storage infrastructure.
For layer 3 traffic, the ALU OmniSwitch™ 6900-X40's measured average latency ranged from a low of 809 ns at jumbo 9216-Byte frames to a high of 910 ns. Its average delay variation for layer 3 traffic ranged between 3 and 9 ns, providing consistent latency across all packet sizes at full line rate.
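When reading latency tables like these, a useful reference quantity is a frame's serialization delay: the time just to clock the frame onto the wire at a given line rate, which is a property of the link speed rather than of the switch. A quick sketch:

```python
# Serialization (wire) delay of a frame at a given line rate: bits / rate.
# This is a property of the link speed, not the switch, and is a handy
# yardstick when comparing sub-microsecond switch latencies across frame sizes.

def serialization_delay_ns(frame_bytes: int, line_rate_gbps: float) -> float:
    return frame_bytes * 8 / line_rate_gbps  # bits / (Gbit/s) -> nanoseconds

for size in (64, 1518, 9216):
    print(size, f"{serialization_delay_ns(size, 10):.1f} ns")
# At 10GbE: 64B -> 51.2 ns, 1518B -> 1214.4 ns, 9216B -> 7372.8 ns
```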
Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 2544 L2 & L3 Throughput Test (% Line Rate)

Frame Size (Bytes)   64   128   192   256   512   1024   1280   1518   2176   9216
Layer 2             100   100   100   100   100    100    100    100    100    100
Layer 3             100   100   100   100   100    100    100    100    100    100
Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 2544 Layer 2 Latency Test (ns)

Frame Size (Bytes)       64      128      192      256      512     1024     1280     1518     2176     9216
Max Latency            1060     1120     1140     1060     1120     1080     1020     1040     1020     1020
Avg Latency          869.37   880.46   897.07   860.74   822.48   833.37   808.98   796.07   802.65   792.39
Min Latency             760      780      740      700      680      700      680      640      680      660
Avg. Delay Variation   7.70     5.48    10.04     5.70     7.96     7.41     3.33    10.33     5.11     9.24

Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 2544 Layer 3 Latency Test (ns)

Frame Size (Bytes)       64      128      192      256      512     1024     1280     1518     2176     9216
Max Latency            1220     1220     1260     1220     1160     1200     1160     1160     1200     1200
Avg Latency          891.41   885.37   910.72   880.76   838.76   850.94   828.33   812.83   819.72   809.94
Min Latency             760      760      740      700      680      680      660      670      665      645
Avg. Delay Variation   7.20     5.89     9.22     5.44     7.65     7.00     3.50     9.13     4.94     9.37
* Test was done between far ports to test across chip performance/latency.
The ALU OmniSwitch™ 6900-X40 demonstrated 100% throughput as a percentage of line rate across all 40 10GbE and six 40GbE ports. In other words, not a single packet was dropped while the ALU OmniSwitch™ 6900-X40 was presented with enough traffic to populate its 40 10GbE and six 40GbE ports at line rate simultaneously for both L2 and L3 traffic flows.
Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 2889 Congestion Test (150% of line rate into a single 10GbE port)

Frame Size (Bytes)                  64      128      192      256      512     1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100      100
Layer 3 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100      100
L2 Head of Line Blocking            no       no       no       no       no       no       no       no       no       no
L2 Back Pressure                   yes      yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames      133444    76142   106048    81612    63596    64864    58794    56506    54864    60730
L3 Head of Line Blocking            no       no       no       no       no       no       no       no       no       no
L3 Back Pressure                   yes      yes      yes      yes      yes      yes      yes      yes      yes      yes
L3 Agg Flow Control Frames      124740    76142   106014    81614    63592    64866    58796    56506    54868    60730
Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 2889 Congestion Test (150% of line rate into a single 40GbE port)

Frame Size (Bytes)                  64      128      192      256      512     1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100      100
Layer 3 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100      100
L2 Head of Line Blocking            no       no       no       no       no       no       no       no       no       no
L2 Back Pressure                   yes      yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames      518736   299492   414694   320628   250906   255842   231850   223580   217598   239894
L3 Head of Line Blocking            no       no       no       no       no       no       no       no       no       no
L3 Back Pressure                   yes      yes      yes      yes      yes      yes      yes      yes      yes      yes
L3 Agg Flow Control Frames      485364   299470   414742   320650   250876   255852   231842   223582   217600   239892
Two congestion tests were conducted using the same methodology: a 10GbE and a 40GbE test stressed the ALU OmniSwitch™ 6900-X40 ToR switch's congestion management attributes at both port speeds. The ALU OmniSwitch™ 6900-X40 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions in both tests. A single 10GbE port was oversubscribed at 150% of line rate, and in a separate test, a single 40GbE port was oversubscribed at 150% of line rate.
The ALU OmniSwitch™ 6900-X40 did not exhibit HOL blocking behavior, which means that as 10GbE and 40GbE ports became congested, it did not impact the performance of other ports. The OmniSwitch™ 6900-X40 did send flow control frames to the Ixia test gear, signaling it to slow down the rate of incoming traffic flow, which is common and expected in ToR switches.
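The arithmetic behind these outcomes is simple: an egress port offered 150% of line rate without flow control can forward at most two thirds of the offered frames, while PAUSE-style flow control throttles the senders so nothing is dropped. A toy model of a single congested port (an illustration of the principle, not a simulation of either switch in this report):

```python
# Toy model of a single congested egress port. Offered load is expressed as a
# fraction of line rate. With PAUSE-style flow control the senders are
# throttled to line rate (no loss); without it, the excess traffic is dropped.
# This illustrates the principle only; it does not simulate either switch.

def delivered_fraction(offered: float, flow_control: bool) -> float:
    """Fraction of offered frames that are forwarded rather than dropped."""
    if flow_control or offered <= 1.0:
        return 1.0
    return 1.0 / offered   # only line rate worth of frames gets through

print(delivered_fraction(1.5, flow_control=True))   # 1.0: senders throttled
print(delivered_fraction(1.5, flow_control=False))  # ~0.667: one third dropped
```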
The ALU OmniSwitch™ 6900-X40 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 1,021 ns at 64-Byte packets to 1,055 ns at 9216-Byte packets. The ALU OmniSwitch™ 6900-X40 demonstrated consistently low IP multicast latency of approximately 1 µs for all packet sizes.
Alcatel-Lucent OmniSwitch™ 6900-X40* RFC 3918 IP Multicast Test

Frame Size (Bytes)            64      128      192      256      512     1024     1280     1518     2176     9216
Agg Tput Rate (%)            100      100      100      100      100      100      100      100      100      100
Agg. Average Latency (ns)  1021.98  1026.76  1045.49  1043.53  1056.07  1057.22  1052.11  1055.53  1054.58  1055.64
Alcatel-Lucent OmniSwitch™ 6900-X40 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 3719
East-West Server_to_Database 1355
East-West HTTP 2710
East-West iSCSI-Server_to_Storage 2037
East-West iSCSI-Storage_to_Server 3085
North-South Client_to_Server 995
North-South Server_to_Client 851
The ALU OmniSwitch™ 6900-X40 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 4 µs, measured in store-and-forward mode.
Power Measurement:
The ALU OmniSwitch™ 6900-X40 represents a new breed of cloud network core switches with power efficiency as a core value. Its WattsATIS/port is 3.38 and its TEER value is 280; higher TEER values are better than lower ones. Note that the total number of OmniSwitch™ 6900-X40 10GbE ports used to calculate WattsATIS/port was 64, as the six 40GbE ports, counted as 24 10GbE-equivalent ports, were added to the 40 10GbE ports. The 40GbE ports are essentially four 10GbE ports from a power consumption point of view.

The ALU OmniSwitch™ 6900-X40 power cost per 10GbE is calculated at $4.12 per year, or $12.36 over three years. The three-year cost to power the ALU OmniSwitch™ 6900-X40 is estimated at $790.87 and represents approximately 2.82% of its list price. In keeping with data center best practices, its cooling fans flow air front to back, while a back-to-front option will be available in the coming months.
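The 10GbE-equivalent port count used in that normalization is simple arithmetic, sketched here:

```python
# Normalize a mixed 10/40GbE switch to 10GbE-equivalent ports, as done for
# the Watts(ATIS)/port figure: each 40GbE port counts as four 10GbE ports.

def ten_gbe_equivalents(ports_10g: int, ports_40g: int) -> int:
    return ports_10g + 4 * ports_40g

ports = ten_gbe_equivalents(40, 6)
print(ports)  # 64, the divisor used for the per-port power figure
print(f"{3.38 * ports:.1f} W total at 3.38 W/port")  # roughly 216 W
```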
Virtualization Scale Calculation
Within one layer 2 domain, the ALU OmniSwitch™ 6900-X40 can support approximately 500 physical servers and 15,500 VMs. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch: one for management, one for IP storage, and so on. Each VM requires one MAC entry, one /32 IP host route and one ARP entry in a data center switch. Clearly the ALU OmniSwitch™ 6900-X40 can support a much larger virtualized infrastructure than the number of its physical ports, assuring virtualization scale in layer 2 domains.
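The scale estimate follows directly from the table sizes: each physical server consumes two /32 host-route entries and each VM one, so the 16,000-entry host-route table is the binding constraint here (the MAC and ARP tables are larger). A sketch of that arithmetic, which lands at roughly 500 servers and 15,000 VMs, close to the report's 15,500 figure:

```python
# Estimate virtualization scale in one layer 2 domain from switch table sizes.
# Assumptions follow the text: 2 host entries per physical server (management
# plus IP storage), 1 entry per VM, and 30 VMs per server. The /32 host-route
# table is the binding constraint for this switch's table sizes.

def virt_scale(host_route_entries: int, vms_per_server: int = 30,
               entries_per_server: int = 2):
    servers = host_route_entries // (entries_per_server + vms_per_server)
    return servers, servers * vms_per_server

servers, vms = virt_scale(16000)
print(servers, vms)  # 500 servers, 15000 VMs
```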
Alcatel-Lucent OmniSwitch™ 6900-X40 Power Consumption Test
WattsATIS/10GbE port 3.38
3-Year Cost/WattsATIS/10GbE $12.36
Total power cost/3-Year $790.87
3 yr energy cost as a % of list price 2.82%
TEER Value 280
Cooling Front to Back, Reversible
Power Cost/10GbE/Yr $4.12
List price $28,000
Price/port $437.50
Alcatel-Lucent OmniSwitch™ 6900-X40 Virtualization Scale Calculation
/32 IP Host Route Table Size 16000*
MAC Address Table Size 128000
ARP Entry Size 32000
Assume 30:1 VM:Physical Machine 32
Total number of Physical Servers per L2 domain 500
Number of VMs per L2 domain 15500
* /32 IP host route table size: 16,000 in Hardware. However, the software can support up to 32,000 routes for OSPFv2 and 128,000 for BGP.
Discussion:
The Alcatel-Lucent (ALU) OmniSwitch™ 6900 Stackable LAN Switch series offers yet another option for ToR switch connections to servers while feeding into a core OmniSwitch™ 10K at 40GbE. The combination of the OmniSwitch 10K at the core and the 6900 at the ToR offers multiple cloud network design options, such as a two-tier, low latency, lossless fabric capable of supporting network and storage traffic in highly dense virtualized infrastructure.
The ALU OmniSwitch™ 6900-X40 demonstrated low sub-900 ns latency, wire-speed 100% throughput, low 3.38 W/10GbE power consumption, consistently low 1 µs latency for IP multicast traffic and high virtualization scale, all within a small 1RU form factor. Further, the OmniSwitch™ 6900-X40 demonstrated fault tolerance attributes required in private and public cloud environments: it delivered 100% traffic throughput while CLI, STP and SNMP were shut down, proving that its network operating system is resilient and that these processes can be shut down while the switch forwards traffic without packet loss. Its buffer architecture is sound, and its link aggregation group (LAG) hashing allows the OmniSwitch™ 6900-X40 to be connected to the OmniSwitch™ 10K or other core switches at 80Gbps reliably. Based upon the OmniSwitch™ 6900-X40's test results, this ToR switch is well targeted for both enterprise and service provider data centers, plus the enterprise campus of a converged enterprise network.
Arista 7504 Series Data Center Switch
On January 8, 2011, Arista Networks launched a new member of its 7500 series family, the 7504 modular switch, which supports 192 10GbE interfaces. The test data and graphics below represent the first public performance and power consumption measurements of the 7504. The 7504 is a smaller version of Arista's award-winning and larger 384-port 10GbE 7508 data center switch.
The Arista 7504 was tested across all 192 ports of 10GbE. Its average latency ranged from a low of 6,831 ns or 6.8 µs to a high of 17,791 ns or 17.8 µs at 64-Byte frames for layer 2 traffic. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Arista 7504's average latency ranged from a low of 6,821 ns to a high of 12,387 ns or 12.4 µs at jumbo 9216-Byte frames. Its average delay variation for layer 3 traffic ranged between 5 and 10.6 ns, providing consistent latency across all packet sizes at full line rate.
Arista 7504 RFC 2544 Layer 2 Latency Test (ns)

Frame Size (Bytes)        64       128       256       512      1024      1280      1518      2176      9216
Max Latency           110120     19560     13800     16020     13540     12520     12040     12500     15160
Avg Latency         17791.203  6831.521  7850.714  8452.021  8635.505   8387.01  8122.078  8995.995  12392.828
Min Latency             3080      3100      3280      3420      3580      3640      3720      3960      6840
Avg. Delay Variation       9         5     5.953         8         7         6        10         5    10.547

Arista 7504 RFC 2544 Layer 3 Latency Test (ns)

Frame Size (Bytes)        64       128       256       512      1024      1280      1518      2176      9216
Max Latency            69420     15300     13880     15060     15340     12720     12060     12820     15160
Avg Latency          9994.88   6821.323  7858.688  8466.526  8647.958   8387.26  8112.948  8996.547  12387.99
Min Latency             3340      3160      3440      3380      3660      3740      3720      3980      6600
Avg. Delay Variation       9         5     5.953         8         7         6        10         5    10.599
Arista Networks 7504 Test Configuration

Device under test: 7504 Modular Switch (http://www.aristanetworks.com), software EOS 4.5.5, port density 192
Test equipment: Ixia XM12 High Performance Chassis (IxOS 6.00 EA); Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module (IxNetwork 5.70 EA); LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (IxAutomate 6.90 GA SP1); http://www.ixiacom.com/
Cabling: Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com)
The Arista 7504 demonstrated 100% throughput as a percentage of line rate across all 192 10GbE ports. In other words, not a single packet was dropped while the 7504 was presented with enough traffic to populate its 192 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.
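As a rough cross-check of what "line rate" means here, the maximum frame rate at a given size follows directly from the 10GbE bit rate and Ethernet's fixed per-frame wire overhead. The sketch below is our own arithmetic, not a report figure; the 192-port aggregate it produces is consistent with the multi-billion-packets-per-second class of throughput this test exercises.

```python
# Line-rate frame rate for 10GbE, per RFC 2544 framing assumptions.
# Each frame on the wire carries 20 bytes of overhead beyond the frame
# itself: 7-byte preamble + 1-byte SFD + 12-byte inter-frame gap.

LINE_RATE_BPS = 10_000_000_000  # 10GbE
WIRE_OVERHEAD = 20              # preamble + SFD + inter-frame gap (bytes)

def frames_per_second(frame_bytes: int, line_rate_bps: int = LINE_RATE_BPS) -> float:
    """Maximum frames/s one port can carry at full line rate."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return line_rate_bps / bits_per_frame

# 64-byte frames are the worst case the throughput test exercises.
pps_64 = frames_per_second(64)   # ~14,880,952 frames/s per port
aggregate = pps_64 * 192         # all 192 ports of the 7504 loaded at once
print(f"{pps_64:,.0f} frames/s/port, {aggregate/1e9:.2f} billion frames/s aggregate")
```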
Arista 7504 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame size (Bytes)  64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2             100  100  100  100  100  100   100   100   100   100
Layer 3             100  100  100  100  100  100   100   100   100   100
Arista 7504 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 14208
East-West Server_to_Database 4394
East-West HTTP 9171
East-West iSCSI-Server_to_Storage 5656
East-West iSCSI-Storage_to_Server 13245
North-South Client_to_Server 5428
North-South Server_to_Client 4225
Arista 7504 Power Consumption Test
WattsATIS/10GbE port 10.31
3-Year Cost/WattsATIS/10GbE $37.69
Total power cost/3-Year $7,237.17
3 yr energy cost as a % of list price 3.14%
TEER Value 92
Cooling Front to Back
The Arista 7504 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed as its latency stayed under 14 µs.
The Arista 7504 represents a new breed of cloud network spine switches with power efficiency being a core value. Its WattsATIS/port is 10.31 and its TEER value is 92. Its power cost per 10GbE port is estimated at $12.56 per year. The three-year cost to power the 7504 is estimated at $7,237.17 and represents approximately 3% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
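The power figures above are internally consistent, and the sketch below re-derives them. The electricity rate is not stated in the report; it is inferred here from the published numbers, so treat it as an assumption.

```python
# Re-deriving the 7504 power-cost figures from the measured WattsATIS/port.
# The electricity rate is inferred from the published numbers, not stated
# in the report, so treat it as an assumption.

WATTS_PER_PORT = 10.31        # measured WattsATIS per 10GbE port
COST_PER_PORT_3YR = 37.69     # published 3-year power cost per 10GbE port ($)
PORTS = 192
HOURS_3YR = 3 * 365 * 24      # 26,280 hours

kwh_per_port_3yr = WATTS_PER_PORT * HOURS_3YR / 1000   # ~271 kWh per port
rate_per_kwh = COST_PER_PORT_3YR / kwh_per_port_3yr    # ~$0.139/kWh (inferred)

total_3yr = COST_PER_PORT_3YR * PORTS                  # ~$7,236 (report: $7,237.17)
per_port_per_year = COST_PER_PORT_3YR / 3              # ~$12.56/year (report: $12.56)
print(f"rate ~ ${rate_per_kwh:.3f}/kWh, 3-year total ~ ${total_3yr:,.2f}")
```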
Arista 7504 RFC 2889 Congestion Test (% line rate)

Frame size (Bytes)            64      128     256     512     1024    1280    1518    2176    9216
Layer 2 Agg. Forwarding Rate  83.34   83.342  83.345  83.35   83.362  83.362  83.362  83.362  83.362
Layer 3 Agg. Forwarding Rate  83.34   83.342  83.345  83.35   83.362  83.362  83.362  83.362  78.209
L2 Head of Line Blocking      no      no      no      no      no      no      no      no      no
L2 Back Pressure              yes     yes     yes     yes     yes     yes     yes     yes     yes
Agg Flow Control Frames       0       0       0       0       0       0       0       0       0
L3 Head of Line Blocking      no      no      no      no      no      no      no      no      yes
L3 Back Pressure              yes     yes     yes     yes     yes     yes     yes     yes     no

The Arista 7504 demonstrated nearly 84% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. For L2 forwarding, the 7504 did not use HOL blocking, which means that as the 7504 10GbE port became congested, it did not impact the performance of other ports. However, during L3 testing, HOL blocking was detected at the 9216 Byte packet size. Note that the 7504 was in beta testing at the time of the Lippis/Ixia test, and there was a "corner case" at 9216 bytes at L3 that needed further tuning. Arista states that its production code provides wirespeed L2/L3 performance at all packet sizes without any head of line blocking.

Note that while the Arista 7504 shows back pressure, in fact there is none. Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7504's 2.3GB of packet buffer memory, it can accept more packets than the MOL; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not the 7504. The Arista 7504 can buffer up to 40ms of traffic per port at 10GbE speeds, which is 400M bits, or 5,425 packets of 9216 bytes.

Arista 7500 switches do not use back pressure (PAUSE) to manage congestion, while other core switches in the industry, not tested here, transmit pause frames under congestion. The Arista 7504 offers lossless forwarding for data center applications due to its deep-buffer VOQ architecture. With 2.3GB of packet buffer memory, Arista could not find any use case where the 7504 needs to transmit pause frames, and hence that functionality is turned off.

The Arista 7504 does honor pause frames and stops transmitting, as many hosts and other switches still cannot support wirespeed 10G traffic. Again, due to its deep buffers, the 7504 can buffer up to 40ms of traffic per port at 10GbE speeds, a necessary factor for a lossless network.

Arista's excellent congestion management results are a direct consequence of its generous buffer memory design as well as its VOQ architecture.
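The per-port buffering claim (40 ms of traffic at 10GbE line rate) reduces to simple arithmetic, sketched below as a sanity check:

```python
# Sanity-checking the 7504's per-port buffering claim: 40 ms of traffic
# held at 10GbE line rate.

LINE_RATE_BPS = 10_000_000_000
BUFFER_SECONDS = 0.040

bits = LINE_RATE_BPS * BUFFER_SECONDS       # 400,000,000 bits (400 Mbit)
bytes_buffered = bits / 8                   # 50 MB per port
jumbo_frames = int(bytes_buffered // 9216)  # ~5,425 frames of 9216 bytes
print(f"{bits/1e6:.0f} Mbit = {bytes_buffered/1e6:.0f} MB ~ {jumbo_frames:,} jumbo frames")
```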
Arista 7504 RFC 3918 IP Multicast Test
At the time of testing in December 2010, the Arista 7504 did not support IP Multicast. Both the 7504 and 7508 now do with EOS 4.8.2.
Discussion:
In a compact 7 RU chassis, the Arista 7504 offers 5 Terabits of fabric capacity, 192 wire-speed 10GbE ports and 2.88 BPPS of L2/3 throughput. The Arista 7500 boasts a comprehensive feature set, as well as future support for the 40GbE and 100GbE standards without needing a fabric upgrade. According to Arista, with front-to-rear airflow and redundant, hot-swappable supervisor, power, fabric and cooling modules, the 7500 is energy efficient, with typical power consumption of approximately 10 watts per port for a fully loaded chassis. In fact, the Lippis/Ixia test measured 10.31 WattsATIS per port for the 7504, with a TEER value of 92. The Arista 7500 is a platform targeted at building low latency, scalable, data center networks.
Arista EOS is a modular switch operating system with a unique state-sharing architecture that separates switch state from protocol processing and application logic. Built on top of a standard Linux kernel, all EOS processes run in their own protected memory space and exchange state through a memory-based database. This multi-process state-sharing architecture provides the foundation for in-service software updates and self-healing resiliency. Arista EOS automates the networking infrastructure natively with VMware vSphere via its VM Tracer to provide VM discovery, auto-VLAN provisioning and visibility into the virtual computing environment. All Arista products, including the 7504 switch, run Arista EOS software. The same binary image supports all Arista products.
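The state-sharing idea described above can be illustrated with a toy publish/subscribe store. This is our own minimal sketch of the pattern, not Arista's code or API: processes never call each other directly; each writes state into a central database, and interested processes react to change notifications, which is what makes individual processes restartable.

```python
# A toy illustration (not Arista's implementation) of multi-process
# state sharing through a central database: writers publish state,
# readers subscribe to paths and get notified on change.

from collections import defaultdict

class StateDB:
    """Minimal in-memory publish/subscribe state store."""
    def __init__(self):
        self._state = {}
        self._subscribers = defaultdict(list)

    def write(self, path, value):
        self._state[path] = value
        for callback in self._subscribers[path]:
            callback(path, value)      # notify interested processes

    def read(self, path):
        return self._state.get(path)

    def subscribe(self, path, callback):
        self._subscribers[path].append(callback)

db = StateDB()
events = []
# A "forwarding agent" reacts to interface state written by a "driver".
db.subscribe("interface/Ethernet1/status", lambda p, v: events.append((p, v)))
db.write("interface/Ethernet1/status", "up")
# The state survives in the database even if the writer restarts.
assert db.read("interface/Ethernet1/status") == "up"
```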
For the Arista 7500 series, engineers have implemented an advanced VOQ architecture that eliminates HOL blocking with a dedicated queue on the ingress side for each output port and each priority level. This is verified in the RFC 2889 layer 2 and 3 congestion Lippis/Ixia test, where there is no HOL blocking even when an egress 10GbE port is saturated with 150% of line rate traffic, and in the RFC 2544 latency Lippis/Ixia test, where the average delay variation range is 5 to 10 ns across all packet sizes. As a result of VOQ, network traffic flows through the switch smoothly and without encountering additional congestion points. In addition, the Arista 7500 VOQ fabric is buffer-less. Packets are only stored once, on ingress. This achieves one of the lowest store-and-forward latencies for any core switch product: less than 10 microseconds for a 64B packet at layer 3.
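Why per-output ingress queues eliminate HOL blocking can be shown with a toy model. This is our own illustration, not the 7500's scheduler: with a single ingress FIFO, one packet stuck behind a congested egress port blocks packets bound for idle ports; with one queue per egress port, each output drains independently.

```python
# A minimal sketch of head-of-line (HOL) blocking versus virtual output
# queues (VOQ). Packets are tagged with their egress port number.

from collections import deque, defaultdict

def deliver_fifo(packets, blocked_port):
    """Single ingress FIFO: delivery stops at the first packet we cannot send."""
    sent, queue = [], deque(packets)
    while queue and queue[0] != blocked_port:
        sent.append(queue.popleft())
    return sent

def deliver_voq(packets, blocked_port):
    """One queue per egress port: only the blocked port's queue stalls."""
    voqs = defaultdict(deque)
    for p in packets:
        voqs[p].append(p)
    return [p for port, q in voqs.items() if port != blocked_port for p in q]

# Port 2 is congested (e.g. offered 150% of line rate).
arrivals = [1, 2, 3, 1, 3]
print(deliver_fifo(arrivals, blocked_port=2))  # [1]: traffic to idle ports 1 and 3 starves
print(deliver_voq(arrivals, blocked_port=2))   # [1, 1, 3, 3]: ports 1 and 3 unaffected
```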
Arista 7124SX 10G SFP Data Center Switch
For this Lippis/Ixia evaluation at iSimCity, Arista submitted the newest addition to its 7100 Series Top-of-Rack (ToR) switches, the 7124SX. The 7124SX is a wire-speed, 24-port cut-through switch in a 1RU (Rack Unit) form factor supporting 100Mb, 1GbE and 10GbE SFP+ optics and cables. It forwards packets at layer 2 and layer 3 in hardware. The 7124SX was built for data center and ultra low latency environments where sub-500ns latency is required. The 7124SX runs Arista's Extensible Operating System (EOS)™, which is common across the entire Arista family of switches in the same binary EOS image and can be customized to customer needs, such as financial services, with access to Linux tools. With EOS, the 7124SX possesses in-service-software-upgrade (ISSU), self-healing stateful fault repair (SFR) and APIs to customize the software.
The 7124SX supports Arista's Latency Analyzer (LANZ), a new capability to monitor and analyze network performance, generate early warning of pending congestion events, and track sources of bottlenecks. In addition, a 50GB Solid State Disk (SSD) is available as a factory option and can be used to store packet data captures on the switch and LANZ-generated historical data, and to take full advantage of the Linux-based Arista EOS. Other EOS provisioning and monitoring modules include Zero Touch Provisioning (ZTP) and VM Tracer, which ease set-up and overall manageability of networks in virtualized data center environments.
Arista Networks 7124SX Test Configuration

Device under test: 7124SX 10G SFP Switch (http://www.aristanetworks.com) running EOS-4.6.2; port density: 24
Test equipment: Ixia XM12 High Performance Chassis (IxOS 6.00 EA); Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module (IxNetwork 5.70 EA); LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (IxAutomate 6.90 GA SP1); http://www.ixiacom.com/
Cabling: Optical SFP+ connectors. Laser optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com)
Arista 7124SX RFC2544 Layer 2 Latency Test (ns)

Frame size (Bytes)     64       128      192      256      512      1024     1280     1518     2176     9216
Max Latency            600      600      620      620      620      620      620      620      620      620
Avg Latency            541.083  548.208  553.292  527.917  528.958  531.583  531.417  531.042  531.417  532.792
Min Latency            480      480      460      460      460      460      460      460      460      460
Avg. Delay Variation   8.417    5.958    9.667    5.542    7.667    7.833    5.917    9.542    6.667    9.667

Arista 7124SX RFC2544 Layer 3 Latency Test (ns)

Frame size (Bytes)     64       128      192      256      512      1024     1280     1518     2176     9216
Max Latency            600      640      640      640      640      640      640      640      640      640
Avg Latency            561.583  582.333  589.5    567.75   570.292  568.792  568.542  564.333  563.833  562.208
Min Latency            500      520      500      480      480      480      480      480      480      480
Avg. Delay Variation   9.958    5.208    9.917    5.417    7.917    6.875    5.5      9.25     5.25     9.333
For those in the financial services industry, pay particular attention to packet size 192 Bytes, which typically is not covered by many tests but is the most common packet size in financial trading environments.
The 7124SX ToR switch was tested across all 24 ports of 10GbE at layer 2 and layer 3 forwarding. The minimum layer 2 latency observed was 460 ns. Its average layer 2 latency was measured from a low of 527 ns to a high of 553 ns, a very tight range across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.5 and 9.7 ns, providing consistent latency across all packet sizes at full line rate.
The minimum layer 3 latency measured was 480 ns. The average layer 3 latency of the 7124SX was measured from a low of 561 ns to a high of 589 ns, again a very tight range across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.2 and 9.9 ns, providing consistent latency across all packet sizes at full line rate.
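This near-flat latency across frame sizes is exactly what cut-through forwarding predicts: the switch begins forwarding once the header is read, so the frame's serialization time never enters a first-bit-in to first-bit-out measurement. For scale, the sketch below computes the serialization delay that a store-and-forward hop would add on top:

```python
# Serialization delay on a 10GbE wire: the time to clock one whole frame
# onto the link. A store-and-forward switch must absorb this before it
# can begin transmitting; a cut-through switch largely avoids it.

def serialization_ns(frame_bytes: int, line_rate_bps: int = 10_000_000_000) -> float:
    """Time to serialize one frame onto a 10GbE link, in nanoseconds."""
    return frame_bytes * 8 / line_rate_bps * 1e9

print(serialization_ns(64))    # 51.2 ns for a minimum-size frame
print(serialization_ns(9216))  # 7372.8 ns, ~7.4 us a store-and-forward hop would add
```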
Arista 7124SX RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame size (Bytes)  64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2             100  100  100  100  100  100   100   100   100   100
Layer 3             100  100  100  100  100  100   100   100   100   100
The Arista 7124SX demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Arista 7124SX was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously for L2 and L3 traffic flows. The 7124SX is indeed a 24-port wire-speed switch.
Arista 7124SX RFC 2889 Congestion Test (% line rate)

Frame size (Bytes)            64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Agg. Forwarding Rate  100  100  100  100  100  100   100   100   100   100
Layer 3 Agg. Forwarding Rate  100  100  100  100  100  100   100   100   100   100
L2 Head of Line Blocking      no   no   no   no   no   no    no    no    no    no
L2 Back Pressure              yes  yes  yes  yes  yes  yes   yes   yes   yes   yes
Agg Flow Control Frames       0    0    0    0    0    0     0     0     0     0
L3 Head of Line Blocking      no   no   no   no   no   no    no    no    no    no
L3 Back Pressure              yes  yes  yes  yes  yes  yes   yes   yes   yes   yes
The Arista 7124SX demonstrated 100% of aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Arista 7124SX did not exhibit Head of Line (HOL) blocking problems, which means that as a 10GbE port on the Arista 7124SX became congested, it did not impact the performance of other ports, avoiding a cascade of dropped packets and congestion.

Uniquely, the Arista 7124SX did indicate to the Ixia test gear that it was using back pressure; however, no flow control frames were detected, and in fact there were none. Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7124SX's generous and dynamic buffer allocation, it can accept more packets than the MOL; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not the 7124SX.

The Arista 7124SX has a 2MB packet buffer pool and uses Dynamic Buffer Allocation (DBA) to manage congestion. Unlike other architectures that have per-port fixed packet memory, the 7124SX can handle microbursts by allocating packet memory to the port that needs it. Such microbursts of traffic can be generated by RFC 2889 congestion tests when multiple traffic sources send traffic to the same destination for a short period of time. Using DBA, a port on the 7124SX can buffer up to 1.7MB of data in its transmit queue.
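The headroom advantage DBA provides under a microburst can be sketched with simple arithmetic. This is assumed behavior based on the figures above (2MB pool, 1.7MB per-queue cap), not vendor code:

```python
# Contrasting fixed per-port buffers with a dynamic shared pool (DBA).
# Under a microburst aimed at one port, a shared pool lets that port
# borrow far more memory than an equal fixed split would give it.

POOL_BYTES = 2_000_000      # ~2MB shared packet buffer (7124SX figure)
PORTS = 24
PER_PORT_CAP = 1_700_000    # ~1.7MB DBA cap per transmit queue

def fixed_headroom(ports: int = PORTS, pool: int = POOL_BYTES) -> int:
    """Fixed scheme: every port owns an equal slice, used or not."""
    return pool // ports                      # ~83KB per port

def dba_headroom(idle_usage: int, pool: int = POOL_BYTES,
                 cap: int = PER_PORT_CAP) -> int:
    """DBA: a bursting port may take the unused memory, up to its cap."""
    return min(pool - idle_usage, cap)

print(fixed_headroom())                   # ~83,333 bytes before drops
print(dba_headroom(idle_usage=100_000))   # 1,700,000 bytes for the hot port
```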
The Arista 7124SX performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 7,685 ns even under congestion, measured in cut-through mode. Of special note is the low latency observed during north-south server-to-client and east-west server-to-database flows.
Arista 7124SX Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 7439
East-West Server_to_Database 692
East-West HTTP 4679
East-West iSCSI-Server_to_Storage 1256
East-West iSCSI-Storage_to_Server 7685
North-South Client_to_Server 1613
North-South Server_to_Client 530
Arista 7124SX RFC 3918 IP Multicast Test

Frame size (Bytes)         64       128      192      256      512      1024     1280     1518     2176     9216
Agg Tput Rate (%)          100      100      100      100      100      100      100      100      100      100
Agg. Average Latency (ns)  562.822  590.087  592.348  558.435  570.913  561.630  569.221  569.725  569.174  565.652
The Arista 7124SX demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 562 ns at 64 Byte packets to 565 ns at 9216 Byte jumbo frames, the lowest IP multicast latency measured thus far in the Lippis/Ixia set of tests. Further, the Arista 7124SX demonstrated consistently low IP Multicast latency.
Arista 7124SX Power Consumption Test
WattsATIS/10GbE port 6.76
3-Year Cost/WattsATIS/10GbE $24.71
Total power cost/3-Year $593.15
3 yr energy cost as a % of list price 4.56%
TEER Value 140
Cooling Front-to-Back, Back-to-Front
The Arista 7124SX represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is a low 6.76 and TEER value is 140. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. The Arista 7124SX consumes slightly more power than a typical 100 Watt light bulb. Its power cost per 10GbE is estimated at $8.24 per year. The three-year cost to power the Arista 7124SX is estimated at $593.15 and represents approximately 4.56% of its list price. Keeping with data center best practices, its cooling fans flow air rear to front and are reversible.
Discussion
The Arista 7124SX offers 24 wire-speed 10GbE ports of layer 2 and 3 cut-through forwarding in a 1 RU footprint. Designed with performance in mind, the Arista 7124SX provides ultra low 500ns latency, line-rate forwarding and unique dynamic buffer allocation to keep traffic moving under congestion and microburst conditions. These claims were verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 527 and 589 ns for layer 2 and 3 forwarding. Of special note is the 7124SX's minimum latency of 460ns across a large range of packet sizes. Further, there was minimal delay variation, between 5 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, boosting confidence that the Arista 7124SX will perform well in converged I/O configurations and consistently in demanding data center environments. In addition, the Arista 7124SX demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various EOS high availability features, ensure that the Arista 7124SX is always available and reliable for data center operations. The 7124SX is well suited for low latency applications such as financial trading, Message Passing Interface (MPI) jobs, storage interconnect, virtualized environments and other 10GbE data center deployments.
The 7124SX is a single-chip 10GbE product, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The 7124SX is just one of the ToR switches in the Arista 7100 Series of fixed configuration ToR switches. In addition, Arista also offers the 7148SX, with low latency and low jitter over 48 ports, and the 7050S-64 ToR switch for denser 10GbE server connections that require 40GbE uplinks. These Arista ToR switch products connect into Arista's 7500 Series of modular core switches, which provide industry-leading density of 192 to 384 10GbE connections. Across all Arista switches is a single binary image of its EOS operating system, allowing all switches to utilize its high availability features as well as unique operational modules including LANZ, VM Tracer, vEOS and ZTP.
The combination of Arista's ToR and core switches enables data center architects to build a flat, non-blocking, two-tier network with high 10GbE density, low latency, low power consumption and high availability. Arista's two-tier designs can scale to 18,000 nodes using standard Layer 2/Layer 3 protocols. All Arista ToR and core switches support 32-port Multi-Chassis LAG (Link Aggregation Group), which enables high levels of switch interconnect to be built, creating large scale-out cloud computing facilities.
Arista Networks 7050S-64 Switch
For this Lippis/Ixia evaluation at iSimCity, Arista submitted its highest density and newest Top-of-Rack (ToR) switch, the 7050S-64. The 7050S-64 is a wire-speed layer 2/3/4 switch with 48 10GbE SFP+ and 4 40GbE QSFP+ ports in a 1RU form factor. Each 40GbE port can also operate as four independent 10GbE ports to provide a total of 64 10GbE ports. The Arista 7050S-64 forwards in cut-through or store-and-forward mode and boasts a shared 9 MB packet buffer pool allocated dynamically to ports that are congested. The 7050S-64 runs Arista's Extensible Operating System (EOS)™, which is common across the entire Arista family of switches in the same binary EOS image. With EOS, the 7050S-64 possesses in-service-software-upgrade (ISSU), self-healing stateful fault repair (SFR) and APIs to customize the software.
The 7050S-64 supports Zero Touch Provisioning and VM Tracer to automate provisioning and monitoring of the network. In addition, a 50GB Solid State Disk (SSD) is available as a factory option and can be used to store packet data captures on the switch and syslogs over the life of the switch, and to take full advantage of the Linux-based Arista EOS.
For those in the financial services industry, pay particular attention to packet size 192 Bytes, which typically is not covered by many tests but is the most common packet size in financial trading environments.
Arista Networks 7050S-64 Test Configuration

Device under test: 7050S-64 Switch (http://www.aristanetworks.com) running EOS-4.6.2; port density: 64
Test equipment: Ixia XM12 High Performance Chassis (IxOS 6.00 EA); Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module (IxNetwork 5.70 EA); LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (IxAutomate 6.90 GA SP1); http://www.ixiacom.com/
Cabling: Optical SFP+ connectors. Laser optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com)
Arista 7050S-64 RFC2544 Layer 2 Latency Test (ns)

Frame size (Bytes)     64       128      192       256       512       1024      1280      1518      2176      9216
Max Latency            1040     1060     1260      1260      1400      1440      1440      1440      1440      1440
Avg Latency            914.328  926.766  1110.813  1110.641  1264.906  1308.844  1309.109  1310.766  1309.766  1305.5
Min Latency            700      780      980       980       1140      1180      1180      1180      1180      1160
Avg. Delay Variation   8.672    5.266    9.594     5.406     7.797     6.922     5.516     9.734     5.234     9.75

Arista 7050S-64 RFC2544 Layer 3 Latency Test (ns)

Frame size (Bytes)     64       128      192       256       512       1024   1280      1518      2176      9216
Max Latency            1180     1220     1320      1320      1460      1520   1520      1520      1520      1520
Avg Latency            905.422  951.203  1038.422  1039.141  1191.641  1233   1232.453  1232.484  1229.922  1225.578
Min Latency            720      780      860       880       1040      1060   1080      1080      1080      1060
Avg. Delay Variation   9.672    5.391    9.641     5.641     7.781     6.75   5.578     9.578     5.328     9.859
The 7050S-64 ToR switch was tested in its 64 10GbE port configuration, using its 4 40GbE ports as 16 10GbE ports. It was tested for both layer 2 and layer 3 forwarding at 10GbE. The minimum layer 2 latency was 700 ns. The average layer 2 latency was measured from a low of 914 ns to a high of 1,310 ns across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.2 and 9.7 ns, providing consistent latency across all packet sizes at full line rate.
The 7050S-64's minimum layer 3 latency was 720 ns. Its layer 3 latency was measured from a low of 905 ns to a high of 1,233 ns, again across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.3 and 9.9 ns, providing consistent latency across all packet sizes at full line rate.
The Arista 7050S-64 switch has the low-est latency amongst all 64 port switches tested so far in the Lippis/Ixia set of tests.
Arista 7050S-64 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame size (Bytes)  64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2             100  100  100  100  100  100   100   100   100   100
Layer 3             100  100  100  100  100  100   100   100   100   100
The Arista 7050S-64 demonstrated 100% throughput as a percentage of line rate across all 64 10GbE ports. In other words, not a single packet was dropped while the Arista 7050S-64 was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously for L2 and L3 traffic flows. The 7050S-64 is indeed a 64-port wire-speed switch.
Arista 7050S-64 RFC 2889 Congestion Test (% line rate)

Frame size (Bytes)            64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Agg. Forwarding Rate  100  100  100  100  100  100   100   100   100   100
Layer 3 Agg. Forwarding Rate  100  100  100  100  100  100   100   100   100   100
L2 Head of Line Blocking      no   no   no   no   no   no    no    no    no    no
L2 Back Pressure              yes  yes  yes  yes  yes  yes   yes   yes   yes   yes
Agg Flow Control Frames       0    0    0    0    0    0     0     0     0     0
L3 Head of Line Blocking      no   no   no   no   no   no    no    no    no    no
L3 Back Pressure              yes  yes  yes  yes  yes  yes   yes   yes   yes   yes
The Arista 7050S-64 demonstrated 100% of aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Arista 7050S-64 did not exhibit Head of Line (HOL) blocking problems, which means that as a 10GbE port on the Arista 7050S-64 became congested, it did not impact the performance of other ports, preventing a cascade of packet loss.

Uniquely, the Arista 7050S-64 did indicate to the Ixia test gear that it was using back pressure; however, no flow control frames were detected, and in fact there were none. Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7050S-64's generous and dynamic buffer allocation, it can accept more packets than the MOL; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not the 7050S-64.

The Arista 7050S-64 is designed with dynamic buffer pool allocation such that during a microburst of traffic, as during the RFC 2889 congestion test when multiple traffic sources are destined to the same port, packets are buffered in packet memory. Unlike other architectures that have fixed per-port packet memory, the 7050S-64 uses Dynamic Buffer Allocation (DBA) to allocate packet memory to ports that need it. Under congestion, packets are buffered in a shared packet memory of 9 MBytes. The 7050S-64 uses DBA to allocate up to 5MB of packet memory to a single port for lossless forwarding, as observed during this RFC 2889 congestion test.
The Arista 7050S-64 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 8,173 ns even under congestion, measured in cut-through mode. Of special note is the low latency observed during north-south server-to-client and east-west server-to-database flows.
Arista 7050S-64 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 8173
East-West Server_to_Database 948
East-West HTTP 4609
East-West iSCSI-Server_to_Storage 1750
East-West iSCSI-Storage_to_Server 8101
North-South Client_to_Server 1956
North-South Server_to_Client 1143
Arista 7050S-64 RFC 3918 IP Multicast Test

Frame size (Bytes)         64      128     192      256      512      1024     1280     1518     2176     9216
Agg Tput Rate (%)          100     100     100      100      100      100      100      100      100      100
Agg. Average Latency (ns)  894.90  951.95  1069.35  1072.35  1232.06  1272.03  1276.35  1261.49  1272.46  1300.35
The Arista 7050S-64 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 894 ns at 64 Byte packets to 1,300 ns at 9216 Byte jumbo frames, the lowest IP multicast latency measured thus far among the 64-port switches in the Lippis/Ixia set of tests.
Arista 7050S-64 Power Consumption Test
WattsATIS/10GbE port 2.34
3-Year Cost/WattsATIS/10GbE $8.56
Total power cost/3-Year $547.20
3 yr energy cost as a % of list price 1.82%
TEER Value 404
Cooling Front-to-Back, Back-to-Front
The Arista 7050S-64 represents a new breed of cloud network leaf or ToR switches with power efficiency as a core value. Its WattsATIS/port is a very low 2.34, and its TEER value is 404; this is the lowest power consumption we have measured thus far across all vendors. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. The Arista 7050S-64 consumes slightly more power than a typical 100-Watt light bulb. Its power cost per 10GbE port is estimated at $2.85 per year. The three-year cost to power the Arista 7050S-64 is estimated at $547.20 and represents approximately 1.82% of its list price. Keeping with data center best practices, Arista provides both front-to-rear and rear-to-front airflow options, allowing the switch to be mounted as cabling needs dictate. Arista has taken the added step of color coding the fan and power supply handles, red for hot aisle and blue for cold aisle: a very nice and unique industrial design detail.
Arista's product literature states that with typical power consumption of less than 2 Watts/port with twinax copper cables, and less than 3 Watts/port with SFP/QSFP lasers, the 7050S-64 provides industry-leading power efficiency for the data center. We concur with this claim and its power consumption values.
Discussion
The Arista 7050S-64 offers 64 wire-speed 10GbE ports with layer 2 and 3 cut-through or store-and-forward mode forwarding in a 1 RU footprint. It can also be configured with 48 ports of 10GbE and four 40GbE uplink ports. Designed with performance, low power and forward migration to 40GbE in mind, the Arista 7050S-64 provides low latency, line-rate forwarding and unique dynamic buffer allocation to keep traffic moving under congestion and micro-burst conditions. These claims were verified in the Lippis/Ixia RFC2544 latency test, where average latency across all packet sizes was measured between 905 and 1,310 ns for layer 2 and 3 forwarding. Of all 64-port 10GbE switches tested so far, the Arista 7050S-64 offers the lowest latency. Further, there was minimal delay variation, between 5 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, and boosting confidence that the Arista 7050S-64 will perform well in converged I/O configurations and consistently in demanding data center environments. In addition, the Arista 7050S-64 demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various EOS high availability features, ensure that the Arista 7050S-64 is always available and reliable for data center operations.
The 7050S-64 is a single-chip 10GbE product, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The 7050S-64 is but one ToR switch in Arista's fixed configuration ToR product line. In addition, Arista offers the 7100 ToR switches for ultra-low latency requirements. These Arista ToR switch products connect into Arista's 7500 Series of modular core switches, which provide industry-leading density of 192 to 384 10GbE connections. Across all Arista switches runs a single binary image of its EOS operating system, allowing all switches to utilize its high availability features as well as unique operational modules including VM Tracer, vEOS and ZTP.

The combination of Arista's ToR and Core switches enables data center architects to build a flat, non-blocking, two-tier network with high 10GbE density, low latency, low power consumption and high availability. All Arista ToR and Core switches support 32-link Multi-Chassis Link Aggregation Groups, which enable high levels of switch interconnect to be built, creating large scale-out cloud computing facilities. The 7050S-64 is well suited as a ToR switch for high-density server connections, where it would connect into Arista 7500 Core/Spine switches. This configuration scales up to 18,000 10GbE connections in a non-blocking two-tier architecture. Alternatively, for 1,500 to 3,000 10GbE connections, the 7050S-64 can serve as both ToR/leaf and Core/Spine.
The 7050S-64 is well suited for low latency applications, storage interconnect, virtualized environments and other 10GbE data center deployments. Where high performance, low latency and low power consumption are all high-priority attributes, the 7050S-64 should do very well.
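The "up to 18,000 10GbE connections" figure cited above can be approximated with simple leaf/spine arithmetic. The port split and spine count below are illustrative assumptions, not figures from the report; note that a 48-down/16-up split is 3:1 oversubscribed at the leaf:

```python
# Sketch of two-tier leaf/spine scaling with the 7050S-64 as leaf and the
# 7500 (384 x 10GbE max density, per the report) as spine.
# LEAF_UPLINKS and the implied spine count are assumptions for illustration.

LEAF_PORTS = 64                              # Arista 7050S-64
LEAF_UPLINKS = 16                            # assumed: one uplink per spine
LEAF_DOWNLINKS = LEAF_PORTS - LEAF_UPLINKS   # 48 server-facing ports per leaf
SPINE_PORTS = 384                            # Arista 7500 at max 10GbE density

# Each spine port terminates one leaf uplink, so the spine port count caps
# the number of leaves when every leaf homes one uplink to every spine.
leaves = SPINE_PORTS
server_ports = leaves * LEAF_DOWNLINKS       # 384 * 48 = 18,432, i.e. ~18,000

print(server_ports)
```

A strictly non-blocking 32-down/32-up leaf split would instead yield 384 x 32 = 12,288 server ports, so the assumed split above is what best matches the report's stated scale.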
BLADE Network Technologies, an IBM Company
IBM BNT RackSwitch G8124
BLADE Network Technologies is an IBM company, referred to hereafter as BLADE. BLADE submitted two ToR switches for evaluation in the Lippis/Ixia test at iSimCity. Both products are based on a single-chip 10GbE network design, which provides consistent throughput and latency over all packet ranges. Each product is profiled here along with its resulting test data.
The IBM BNT RackSwitch G8124 is a 24-port 10GbE switch, specifically designed for the data center, providing a virtual, cooler and easier network solution, according to BLADE. The IBM BNT RackSwitch G8124 is equipped with enhanced processing and memory to support large-scale aggregation layer 10GbE switching environments and applications, such as high-frequency financial trading and IPTV.
The IBM BNT RackSwitch G8124 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 651 ns to a high of 709 ns for layer 2 traffic. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.
IBM BNT RackSwitch G8124* RFC2544 Layer 2 Latency Test (all values in ns)

Frame Size (Bytes)     64       128      256      512      1024     1280     1518     2176     9216
Max Latency            700      700      740      740      740      740      740      740      740
Avg Latency            651.75   671.5    709.458  709.042  708.25   709.708  707.917  708.042  706.625
Min Latency            600      620      660      660      660      660      660      660      660
Avg. Delay Variation   9.25     5.083    5.417    7.875    7.167    5.25     10.125   5.167    10.125

IBM BNT RackSwitch G8124* RFC2544 Layer 3 Latency Test (all values in ns)

Frame Size (Bytes)     64       128      256      512      1024     1280     1518     2176     9216
Max Latency            760      760      800      800      800      800      800      800      800
Avg Latency            642.708  659.667  701.292  699.792  699      699.917  698.5    698.333  696.917
Min Latency            600      600      640      660      660      660      660      660      660
Avg. Delay Variation   8.75     5.667    5.458    8.083    7.292    7.833    10.375   5.125    11.083
IBM BNT RackSwitch G8124 Test Configuration

Device under test: IBM BNT RackSwitch G8124 (http://www.bladenetwork.net/RackSwitch-G8124.html), software version 6.5.2.3, port density 24
Test equipment: Ixia XM12 High Performance Chassis (http://www.ixiacom.com/)
  - Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
  - LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules
  - 1U Application Server
  Software: IxOS 6.00 EA, IxNetwork 5.70 EA, IxAutomate 6.90 GA SP1
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron multimode fiber, 850 nm SFP+ transceivers (www.leviton.com)
Video feature: Click to view IBM BNT RackSwitch G8124 video podcast
* The IBM BLADE switches were tested differently than all other switches. The IBM BNT RackSwitch G8264 and G8124 were configured and tested via the cut-through test method, while all other switches were configured and tested via the store-and-forward method. During store-and-forward testing, the test equipment subtracts packet transmission latency, decreasing actual latency measurements by the time it takes to transmit a packet from input to output port. Note that other device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.
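The size of the subtracted transmission (serialization) latency is easy to quantify, which is why the two methodologies diverge most at large frame sizes. A minimal sketch of the arithmetic at 10GbE:

```python
# Serialization delay of one Ethernet frame at 10GbE: the quantity that
# store-and-forward RFC 2544 measurements subtract and cut-through
# measurements do not.

LINE_RATE_BPS = 10e9  # 10GbE

def serialization_ns(frame_bytes: int) -> float:
    """Time to clock one frame onto a 10GbE link, in nanoseconds."""
    return frame_bytes * 8 / LINE_RATE_BPS * 1e9

print(serialization_ns(64))    # 51.2 ns
print(serialization_ns(9216))  # 7372.8 ns
```

At 64 Bytes the difference is ~51 ns, already significant against sub-microsecond switch latencies; at 9216 Byte jumbo frames it grows to ~7.4 microseconds, dwarfing the switch's own forwarding delay.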
For 64 Byte packets, the Ixia test gear was configured for a staggered start.
For layer 3 traffic, the IBM BNT RackSwitch G8124’s average latency ranged from a low of 642 ns to a high of 701 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 ns and 11 ns, providing consistent latency across all packet sizes at full line rate.
IBM BNT RackSwitch G8124 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame Size (Bytes)   64   128  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput   100  100  100  100  100   100   100   100   100
Layer 3 Throughput   100  100  100  100  100   100   100   100   100

IBM BNT RackSwitch G8124 RFC 2889 Congestion Test

Frame Size (Bytes)            64       128      256      512      1024     1280     1518     2176     9216
L2 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100
L3 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100
Head of Line Blocking         no       no       no       no       no       no       no       no       no
Back Pressure                 yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames    8202900  6538704  4531140  4303512  5108166  9139344  8690688  8958128  9022618
L3 Agg Flow Control Frames    8291004  6580984  4531070  4315590  4398264  4661968  4433112  4630880  4298994
The IBM BNT RackSwitch G8124 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the IBM BNT RackSwitch G8124 was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.

The IBM BNT RackSwitch G8124 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The IBM BNT RackSwitch G8124 did not exhibit Head of Line (HOL) blocking problems, which means that as a G8124 10GbE port became congested, it did not impact the performance of other ports. As with most ToR switches, the IBM BNT RackSwitch G8124 did use back pressure, as the Ixia test gear detected flow control frames.
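The back-pressure result can be sanity-checked with simple arithmetic: for the test to remain drop-free, the excess offered load on the congested port must be held at the sources, which is what the observed 802.3x flow control (pause) frames accomplish. A minimal sketch, with figures illustrative of the test scenario:

```python
# Back-of-envelope view of the RFC 2889 congestion scenario: an egress port
# offered 150% of line rate stays drop-free only if the excess load is paused
# at the ingress ports via flow control frames.

LINE_RATE_GBPS = 10.0
OFFERED_LOAD = 1.5  # 150% of line rate

# Excess traffic that must be held back for zero loss.
excess_gbps = (OFFERED_LOAD - 1.0) * LINE_RATE_GBPS  # 5 Gb/s
excess_bytes_per_sec = excess_gbps * 1e9 / 8         # 625 MB/s of paused load

print(excess_gbps, excess_bytes_per_sec)
```

The millions of flow control frames counted in the table above are the mechanism by which that sustained 5 Gb/s of excess is continuously throttled at the sources for the duration of the test.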
The IBM BNT RackSwitch G8124 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 650 ns at 64 Byte packets to 725 ns at 1280 Byte packets.
IBM BNT RackSwitch G8124 RFC 3918 IP Multicast Test

Frame Size (Bytes)          64   128  256  512  1024  1280  1518  2176  9216
Agg. Tput Rate (%)          100  100  100  100  100   100   100   100   100
Agg. Average Latency (ns)   650  677  713  715  713   725   707   713   712
The IBM BNT RackSwitch G8124 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, with average latency staying between 581 and 643 ns. Of special note is the consistency in latency across all traffic profiles observed during this test of the IBM BNT RackSwitch G8124.
The IBM BNT RackSwitch G8124 represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is 5.5 and TEER value is 172. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE is estimated at $6.70 per year. The three-year cost to power the IBM BNT RackSwitch G8124 is estimated at $482.59 and represents approximately 4% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
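The TEER figures reported across these evaluations track aggregate throughput per watt. The exact ATIS weighting is not reproduced in this report, but dividing each switch's aggregate line rate by its measured draw shows the reported TEER values scale with Gb/s per watt by a near-constant factor. The ~94.5 factor below is inferred from the published numbers, not taken from the ATIS standard:

```python
# Cross-check: reported TEER values vs. simple aggregate-throughput-per-watt.
# The normalization factor is an observation from this report's own data.

switches = {
    # name: (10GbE ports, WattsATIS per port, reported TEER)
    "Arista 7050S-64":     (64, 2.34, 404),
    "IBM BNT G8124":       (24, 5.50, 172),
    "IBM BNT G8264":       (64, 3.92, 241),
    "Brocade VDX 6720-24": (24, 3.05, 310),
}

for name, (ports, watts_per_port, teer) in switches.items():
    # Aggregate line rate over total draw; port counts cancel to 10/watts_per_port.
    gbps_per_watt = ports * 10 / (ports * watts_per_port)
    print(name, round(teer / gbps_per_watt, 1))  # near-constant, ~94.5
```

The practical takeaway is that, within this report, ranking switches by WattsATIS/port and ranking them by TEER give the same ordering.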
IBM BNT RackSwitch G8124 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 613
East-West Server_to_Database 598
East-West HTTP 643
East-West iSCSI-Server_to_Storage 600
East-West iSCSI-Storage_to_Server 619
North-South Client_to_Server 635
North-South Server_to_Client 581
IBM BNT RackSwitch G8124 Power Consumption Test
WattsATIS/10GbE port 5.5
3-Year Cost/WattsATIS/10GbE $20.11
Total power cost/3-Year $482.59
3 yr energy cost as a % of list price 4.04%
TEER Value 172
Cooling Front to Back
Discussion:
The IBM BNT RackSwitch G8124 offers 24 wire-speed 10GbE ports in a 1 RU footprint. Designed with performance in mind, the IBM BNT RackSwitch G8124 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, and large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC2544 latency test, where average latency across all packet sizes was measured between 651 and 709 ns for layer 2 forwarding and between 642 and 701 ns for layer 3 forwarding. Further, there was minimal delay variation, between 5 and 10 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly. In addition, the IBM BNT RackSwitch G8124 demonstrated 100% throughput at all packet sizes, independent of layer 2 or layer 3 forwarding.
Redundant power and fans along with various high availability features ensure that the IBM BNT RackSwitch G8124 is always available for business-sensitive traffic. The low latency offered by the IBM BNT RackSwitch G8124 makes it ideal for latency sensitive applications, such as high performance computing clusters and financial applications. Further, the IBM BNT RackSwitch G8124 supports the newest data center protocols including CEE for support of FCoE which was not evaluated in the Lippis/Ixia test.
BLADE Network Technologies, an IBM Company
IBM BNT RackSwitch G8264

IBM BNT RackSwitch G8264 Test Configuration

Device under test: IBM BNT RackSwitch G8264 (http://www.bladenetwork.net/RackSwitch-G8264.html), software version 6.4.1.0, port density 64
Test equipment: Ixia XM12 High Performance Chassis (http://www.ixiacom.com/)
  - Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
  - LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules
  - 1U Application Server
  Software: IxOS 6.00 EA, IxNetwork 5.70 EA, IxAutomate 6.90 GA SP1
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron multimode fiber, 850 nm SFP+ transceivers (www.leviton.com)
The IBM BNT RackSwitch G8264 is a wire-speed 10 and 40GbE switch specifically designed for the data center. It is offered in two configurations: as a 64-port wire-speed 10GbE ToR switch, or as 48 ports of wire-speed 10GbE plus four wire-speed 40GbE ports. The total bandwidth of the IBM BNT RackSwitch G8264 is 1.2 Terabits per second, packaged in a 1 RU footprint. For the Lippis/Ixia test, the IBM BNT RackSwitch G8264 was configured as a 64-port wire-speed 10GbE ToR switch.

The IBM BNT RackSwitch G8264 ToR switch was tested across all 64 ports of 10GbE. Its average latency ranged from a low of 1081 ns to a high of 1422 ns for layer 2 traffic. Its average delay variation ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate.

For layer 3 traffic, the IBM BNT RackSwitch G8264's average latency ranged from a low of 1075 ns to a high of 1422 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate.

IBM BNT RackSwitch G8264* RFC2544 Layer 2 Latency Test (all values in ns)

Frame Size (Bytes)     64        128       256       512       1024      1280      1518      2176      9216
Max Latency            1240      1300      1380      1540      1580      1580      1580      1580      1580
Avg Latency            1081.344  1116.828  1214.922  1374.25   1422      1421.281  1421.75   1419.891  1415.359
Min Latency            940       980       1020      1240      1260      1260      1260      1280      1280
Avg. Delay Variation   8.688     5.219     5.609     7.609     6.922     7.563     9.641     5.047     9.563

IBM BNT RackSwitch G8264* RFC2544 Layer 3 Latency Test (all values in ns)

Frame Size (Bytes)     64        128       256       512       1024      1280      1518      2176      9216
Max Latency            1240      1300      1380      1540      1580      1580      1580      1580      1580
Avg Latency            1075.391  1121.016  1213.547  1370.938  1421.938  1422.625  1421.828  1419.734  1415.781
Min Latency            940       980       1020      1240      1260      1260      1260      1280      1280
Avg. Delay Variation   8.609     5.172     5.641     7.672     6.734     6.25      9.422     5.078     9.578

* The IBM BLADE switches were tested differently than all other switches. The IBM BNT RackSwitch G8264 and G8124 were configured and tested via the cut-through test method, while all other switches were configured and tested via the store-and-forward method. During store-and-forward testing, the test equipment subtracts packet transmission latency, decreasing actual latency measurements by the time it takes to transmit a packet from input to output port. Note that other device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.
The IBM BNT RackSwitch G8264 demonstrated 100% throughput as a percentage of line rate across all 64 10GbE ports. In other words, not a single packet was dropped while the IBM BNT RackSwitch G8264 was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.
IBM BNT RackSwitch G8264 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame Size (Bytes)   64   128  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput   100  100  100  100  100   100   100   100   100
Layer 3 Throughput   100  100  100  100  100   100   100   100   100

IBM BNT RackSwitch G8264 RFC 2889 Congestion Test

Frame Size (Bytes)            64       128      256      512      1024     1280     1518     2176     9216
L2 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100
L3 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100
Head of Line Blocking         no       no       no       no       no       no       no       no       no
Back Pressure                 yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames    7498456  5582384  6014492  5352004  3751186  514182   844520   903536   9897178
L3 Agg Flow Control Frames    7486426  5583128  6008396  5351836  5348868  5096720  4982224  4674228  3469558
The IBM BNT RackSwitch G8264 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The IBM BNT RackSwitch G8264 did not exhibit Head of Line (HOL) blocking problems, which means that as the IBM BNT G8264 10GbE port became congested, it did not impact the performance of other ports. As with most ToR switches, the IBM BNT RackSwitch G8264 did use back pressure, as the Ixia test gear detected flow control frames.
The IBM BNT RackSwitch G8264 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 1059 ns at 64 Byte packets to 1348 ns at 1024 and 1280 Byte packets.
IBM BNT RackSwitch G8264 RFC 3918 IP Multicast Test

Frame Size (Bytes)          64    128   256   512   1024  1280  1518  2176  9216
Agg. Tput Rate (%)          100   100   100   100   100   100   100   100   100
Agg. Average Latency (ns)   1059  1108  1139  1305  1348  1348  1332  1346  1342
The IBM BNT RackSwitch G8264 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, with average latency staying between 1017 and 1063 ns. Of special note is the consistency in latency across all traffic profiles observed during this test of the IBM BNT RackSwitch G8264.
The IBM BNT RackSwitch G8264 represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is 3.92 and TEER value is 241. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE is estimated at $4.78 per year. The three-year cost to power the IBM BNT RackSwitch G8264 is estimated at $917.22 and represents approximately 4% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
IBM BNT RackSwitch G8264 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 1060
East-West Server_to_Database 1026
East-West HTTP 1063
East-West iSCSI-Server_to_Storage 1026
East-West iSCSI-Storage_to_Server 1056
North-South Client_to_Server 1058
North-South Server_to_Client 1017
IBM BNT RackSwitch G8264 Power Consumption Test
WattsATIS/10GbE port 3.92
3-Year Cost/WattsATIS/10GbE $14.33
Total power cost/3-Year $917.22
3 yr energy cost as a % of list price 4.08%
TEER Value 241
Cooling Front to Back
Discussion:
The IBM BNT RackSwitch G8264 was designed with top performance in mind, as it provides line-rate, high-bandwidth switching, filtering, and traffic queuing with low, consistent latency. This claim was verified in the Lippis/Ixia RFC2544 latency test, where average latency across all packet sizes was measured between 1081 and 1422 ns for layer 2 and between 1075 and 1422 ns for layer 3 forwarding. Further, there was minimal delay variation, between 5 and 9.5 ns, across all packet sizes for layer 2 and 3 forwarding, demonstrating the switch architecture's ability to forward packets at nanosecond speeds consistently under load. In addition, the IBM BNT RackSwitch G8264 demonstrated 100% throughput at all packet sizes, independent of layer 2 or layer 3 forwarding.
The nanosecond latency offered by the IBM BNT RackSwitch G8264 makes it ideal for latency-sensitive applications, such as high performance computing clusters and financial applications. In fact, the IBM BNT RackSwitch G8264 demonstrated 100% aggregated throughput at all packet sizes, with latency measured in the 1059 to 1348 ns range, during the Lippis/Ixia RFC 3918 IP multicast test. Further, the IBM BNT RackSwitch G8264 supports the newest data center protocols, including CEE for support of FCoE, which was not evaluated in the Lippis/Ixia test.
Brocade VDXTM 6720-24
For this Lippis/Ixia evaluation at iSimCity, Brocade submitted the industry's first fabric-enabled ToR switch, the VDX™ 6720-24, built with Brocade's Virtual Cluster Switching (VCS™) technology, an Ethernet fabric innovation that addresses the unique requirements of evolving data center environments. The Brocade VDX™ 6720-24 is a 24-port 10 Gigabit Ethernet (10GbE) ToR switch. Brocade® VDX™ 6720 Data Center Switches are 10GbE line-rate, low-latency, lossless switches available in 24-port (1U) and 60-port (2U) models. The Brocade VDX™ 6720 series switch is based on sixth-generation ASIC fabric switching technology and runs the feature-dense Brocade Network Operating System (NOS).
The VDX™ 6720-24 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 510 ns to a high of 619 ns for layer 2 traffic. Its average delay variation ranged between 5 and 8 ns, providing consistent latency across all packet sizes at full line rate.
Brocade VDX™ 6720-24 Test Configuration

Device under test: VDX™ 6720-24 (http://www.brocade.com/), software version 2.0.0a, port density 24
Test equipment: Ixia XM12 High Performance Chassis (http://www.ixiacom.com/)
  - Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
  - LSM10GXM8XP 10 Gigabit Ethernet load modules
  - 1U Application Server
  Software: IxOS 6.00 EA, IxNetwork 5.70 EA, IxAutomate 6.90 GA SP1
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron multimode fiber, 850 nm SFP+ transceivers (www.leviton.com)
Brocade VDX™ 6720-24 RFC2544 Layer 2 Latency Test (all values in ns)

Frame Size (Bytes)     64       128      256      512      1024     1280     1518     2176     9208
Max Latency            760      640      660      680      680      680      680      680      680
Avg Latency            527.042  510.708  570.083  619.458  615.542  615.583  614.208  612.167  610.958
Min Latency            440      420      480      560      560      560      560      560      560
Avg. Delay Variation   8.75     5.375    5.875    7.75     6.375    8.25     8.75     5.167    6.542

Note: The VDX 6720-24's maximum supported frame size is 9208 Bytes, so 9208 Byte frames were tested in place of the 9216 Byte jumbo frames used elsewhere in this report.

Brocade VDX™ 6720-24 RFC2544 Layer 3 Latency Test

The Brocade VDX 6720-24 is a layer 2 switch and thus did not participate in the layer 3 tests.
The Brocade VDX™ 6720-24 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Brocade VDX™ 6720-24 was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously for L2 traffic flows.
Brocade VDX™ 6720-24 RFC 2544 L2 Throughput Test (% line rate)

Frame Size (Bytes)   64   128  256  512  1024  1280  1518  2176  9208
Layer 2 Throughput   100  100  100  100  100   100   100   100   100

Brocade VDX™ 6720-24 RFC 2889 Congestion Test

Frame Size (Bytes)            64       128      256      512      1024     1280     1518     2176     9208
L2 Agg. Forwarding Rate (%)   100      100      100      100      100      100      100      100      100
Head of Line Blocking         no       no       no       no       no       no       no       no       no
Back Pressure                 yes      yes      yes      yes      yes      yes      yes      yes      yes
Agg Flow Control Frames       6281422  7108719  6295134  5677393  5373794  5206676  5163746  4943877  4317694
The Brocade VDX™ 6720-24 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Brocade VDX™ 6720-24 did not exhibit Head of Line (HOL) blocking problems, which means that as the congested port filled, it did not impact the performance of other ports. As with most ToR switches, the Brocade VDX™ 6720-24 uses back pressure, as the Ixia test gear detected flow control frames.
The Brocade VDX™ 6720-24 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed as its latency stayed at or below 7901 ns, measured in cut-through mode. Of special note is the low latency of 572 ns and 675 ns observed during north-south server-to-client and east-west server-to-database flows, respectively.
The Brocade VDX™ 6720-24 represents a new class of Ethernet-fabric-enabled fixed port switches with power efficiency as a core value. Its WattsATIS/port is a very low 3.05 and its TEER value is 310. Note that TEER values represent the ratio of work performed over energy consumption; hence, higher TEER values are more desirable. Its power cost per 10GbE port is estimated at $3.72 per year. The three-year cost to power the Brocade VDX™ 6720-24 is estimated at $267.62, approximately 2.24% of its list price. Keeping with data center best practices, its cooling fans flow air rear to front, and a front-to-back airflow option is also available to meet all customer cooling requirements.
Brocade VDXTM 6720-24 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 7653
East-West Server_to_Database 675
East-West HTTP 4805
East-West iSCSI-Server_to_Storage 1298
East-West iSCSI-Storage_to_Server 7901
North-South Client_to_Server 1666
North-South Server_to_Client 572
Brocade VDXTM 6720-24 Power Consumption Test
WattsATIS/10GbE port 3.05
3-Year Cost/WattsATIS/10GbE $11.15
Total power cost/3-Year $267.62
3 yr energy cost as a % of list price 2.24%
TEER Value 310
Cooling Front to Back
Discussion:
New Ethernet-fabric-based architectures are emerging to support highly virtualized data center environments and the building of cloud infrastructures, allowing IT organizations to be more agile and customer responsive. Ethernet fabrics will transition data center networking from inefficient and cumbersome Spanning Tree Protocol (STP) networks to high performance, flat, layer 2 networks built for the east-west traffic demanded by server virtualization and highly clustered applications.
The Brocade VDX™ 6720-24 with VCS technology offers 24 wire-speed 10GbE ports of layer 2 cut-through forwarding in a 1 RU footprint. Designed with performance in mind, the Brocade VDX™ 6720-24 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, and large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC2544 latency test, where average latency across all packet sizes was measured between 510 and 619 ns for layer 2 forwarding. Further, there was minimal delay variation, between 5 and 8 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, and boosting confidence that the Brocade VDX™ 6720-24 will perform well in converged I/O configurations. In addition, the Brocade VDX™ 6720-24 demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various high availability features, ensure that the Brocade VDX™ 6720-24 is always available and reliable for data center operations.
The VDX™ 6720-24 is built on an innovative Brocade-designed ASIC, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The VDX™ 6720-24 is the entry-level model of the VDX family, which can be configured as a single logical chassis and scales to support 16, 24, 40, 50 and 60 ports of 10GbE.
Brocade VCS capabilities allow customers to start with small two-switch ToR deployments, such as the VDX™ 6720-24 tested here, and scale to large virtualization-optimized Ethernet fabrics with tens of thousands of ports over time. Furthermore, Brocade claims that VCS-based Ethernet fabrics are convergence ready, allowing customers to deploy one network for both storage (Fibre Channel over Ethernet (FCoE), iSCSI and NAS) and IP data traffic.
Brocade VCS technology promises lower capital cost by collapsing the access and aggregation layers of a three-tier network into two, and lower operating cost through managing fewer devices. Customers will be able to seamlessly transition their existing multi-tier hierarchical networks to a much simpler, virtualized, and converged data center network, moving at their own pace, according to Brocade.
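The "fewer devices" claim can be illustrated with a rough device count for a mid-size deployment. Every topology parameter below is an assumption for the sake of the arithmetic, not a figure from the report or from Brocade:

```python
# Illustrative managed-device comparison: classic three-tier vs. a collapsed
# two-tier fabric. All parameters are assumed example values.

SERVERS = 1920
SERVERS_PER_TOR = 24   # e.g. one VDX 6720-24 per rack of 24 servers
AGG_SWITCHES = 10      # assumed aggregation layer eliminated by the fabric
CORE_SWITCHES = 2

tors = SERVERS // SERVERS_PER_TOR                         # 80 ToR switches

three_tier_devices = tors + AGG_SWITCHES + CORE_SWITCHES  # 92 managed devices
two_tier_devices = tors + CORE_SWITCHES                   # 82 managed devices

print(three_tier_devices, two_tier_devices)
```

Beyond the raw device count, a fabric managed as a single logical chassis further reduces the number of management touch points, which is the larger share of the claimed operating cost reduction.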
Brocade VDX™ 6730-32
For this Lippis/Ixia evaluation at iSimCity, Brocade submitted the industry's first fabric-enabled VDX™ 6730-32 Top-of-Rack (ToR) switch, built with Brocade's Virtual Cluster Switching (VCS™) technology, an Ethernet fabric innovation that addresses the unique requirements of evolving data center environments. The VDX™ 6730-32 is a fixed-configuration ToR convergence switch supporting eight 8 Gbps native Fibre Channel ports plus 24 10GbE ports, well suited for consolidated I/O applications. In addition, the VDX™ 6730-32 supports Ethernet-based storage connectivity for Fibre Channel over Ethernet (FCoE), iSCSI and NAS, protecting investment in Fibre Channel Storage Area Networks and Ethernet fabrics.
The Brocade VDX™ 6730 is available in two models—the 2U Brocade VDX 6730-76 with 60 10 GbE LAN ports and 16 8-Gbps native Fibre Channel ports, and the 1U Brocade VDX 6730-32 with 24 10 GbE LAN ports and eight 8 Gbps native Fibre Channel ports.
Brocade VDX™ 6730-32 Test Configuration*

Device under test: VDX™ 6730-32 (http://www.brocade.com/), Brocade NOS 2.1, 24 ports
Test Equipment: Ixia XM12 High Performance Chassis, IxOS 6.10 EA SP2, IxNetwork 6.10 EA (http://www.ixiacom.com/)
Ixia Line Cards: Xcellon Flex AP10G16S (16-port 10G module); Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G); Xcellon Flex 4x40GE QSFP+ (4-port 40G)
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron MM fiber, 850nm SFP+ transceivers; Siemon QSFP+ Passive Copper Cable, 40 GbE, 3 meters (QSFP-FA-010); Siemon Moray Low Power Active Optical Cable Assemblies, Single Mode, QSFP+ 40GbE optical cable, 7 meters (QSFP30-03) (www.siemon.com)
Brocade VDX™ 6730-32 RFC 2544 Layer 2 Latency Test (ns)

Packet Size (Bytes)    64      128     192     256     512     1024    1280    1518    2176    9216
Max Latency            760     660     680     720     740     740     720     740     740     740
Avg Latency            521.17  530.04  540.75  577.92  606.50  605.17  604.92  607.75  606.42  629.00
Min Latency            440     460     480     500     580     580     580     580     580     580
Avg. Delay Variation   8.71    5.46    9.25    5.63    7.71    6.63    4.75    9.63    5.17    9.67

Brocade VDX™ 6730-32 RFC 2544 Layer 3 Latency Test

The Brocade VDX 6730-32 is a layer 2 switch and thus did not participate in the layer 3 test.
Test was done between far ports to measure across-chip performance/latency.
* Brocade VDX 6730-32 was configured with 24 10GbE ports.
Video feature: Click to view Brocade Networks video podcast
For the Fall 2011 Lippis/Ixia test, we populated and tested the Brocade VDX™ 6730-32's 10GbE functionality; it is a line-rate, low-latency, lossless ToR switch in a 1U form factor. The Brocade VDX™ 6730 series switches are based on sixth-generation custom ASIC fabric switching technology and run the feature-rich Brocade Network Operating System (NOS).
The VDX™ 6730-32 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 521 ns to a high of 606 ns for layer 2 traffic measured in cut-through mode. The VDX™ 6730-32 demonstrates one of the most consistent latency results we have measured. Its average delay variation ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate, a comforting result given the VDX™ 6730-32's design focus on converged network/storage infrastructure.
The Brocade VDX™ 6730-32 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Brocade VDX™ 6730-32 was presented with enough traffic to populate its 24 10GbE ports at line rate.
The Brocade VDX™ 6730-32 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was oversubscribed at 150% of line rate. The Brocade VDX™ 6730-32 did not exhibit HOL blocking behavior, which means that as the 10GbE port became congested, it did not impact the performance of other ports, assuring reliable operation during congestion conditions.
Brocade VDX™ 6730-32 RFC 2544 L2 Throughput Test (% Line Rate)

Packet Size (Bytes)  64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2              100  100  100  100  100  100   100   100   100   100
Brocade VDX™ 6730-32 RFC 2889 Congestion Test (150% of line rate into a single 10GbE port)

Frame (Bytes)                         64       128      192      256      512      1024     1280     1518      2176      9216
Layer 2 Agg. Forwarding Rate (%)      100      100      100      100      100      100      100      100       100       100
L2 Head of Line Blocking              no       no       no       no       no       no       no       no        no        no
L2 Back Pressure                      yes      yes      yes      yes      yes      yes      yes      yes       yes       yes
L2 Agg. Flow Control Frames           6192858  7126424  5278148  5745640  5706859  5366002  5789807  16982513  43176975  153639
As with most ToR switches, the Brocade VDX™ 6730-32 did use backpressure, as the Ixia test gear detected flow control frames.
The Brocade VDX™ 6730-32 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 8.2 μs measured in cut-through mode.
The Brocade VDX™ 6730-32 represents a new breed of cloud network switches with power efficiency as a core value. Its WattsATIS/port is 1.5 Watts and its TEER value is 631. Remember, higher TEER values are better than lower ones. Note that this is the lowest power consumption we have observed for a 24-port ToR switch. The Brocade VDX™ 6730-32 was equipped with one power supply while others are usually equipped with two. Even so, the Brocade VDX™ 6730-32 is one of the lowest power-consuming 24-port ToR switches on the market. Note that the eight native FC ports were not populated and active, which would likely consume additional power.
The Brocade VDX™ 6730-32 power cost per 10GbE port is calculated at $1.83 per year, or $5.48 over three years. The three-year cost to power the Brocade VDX™ 6730-32 is estimated at $131.62 and represents approximately 0.82% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
Brocade VDX™ 6730-32 Cloud Simulation Test

Traffic Direction  Traffic Type              Avg Latency (ns)
East-West          Database_to_Server        7988
East-West          Server_to_Database        705
East-West          HTTP                      4852
East-West          iSCSI-Server_to_Storage   1135
East-West          iSCSI-Storage_to_Server   8191
North-South        Client_to_Server          1707
North-South        Server_to_Client          593
Brocade VDX™ 6730-32 Power Consumption Test

WattsATIS/10GbE port                       1.5
3-Year Cost/WattsATIS/10GbE                $5.48
Total power cost/3-Year                    $131.62
3-yr energy cost as a % of list price      0.82%
TEER Value                                 631
Cooling                                    Front to Back, Reversible
Power Cost/10GbE/Yr                        $1.83
List price                                 $16,080
Price/port                                 $670.00
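The power-cost figures above can be reproduced with simple arithmetic. The sketch below is a minimal reconstruction; the electricity rate is an assumption inferred from the published figures (roughly $0.139/kWh), since the report does not state the rate it used.

```python
# Reconstruction of the report's 3-year power-cost arithmetic for the
# Brocade VDX 6730-32. RATE_USD_PER_KWH is an assumed value, back-solved
# from the published $5.48/port figure; the report does not disclose it.
WATTS_PER_PORT = 1.5       # measured WattsATIS per 10GbE port
PORTS = 24
RATE_USD_PER_KWH = 0.139   # assumed electricity rate
HOURS_3YR = 3 * 365 * 24   # 26,280 hours of continuous operation

kwh_per_port_3yr = WATTS_PER_PORT * HOURS_3YR / 1000.0   # 39.42 kWh
cost_per_port_3yr = kwh_per_port_3yr * RATE_USD_PER_KWH
total_cost_3yr = cost_per_port_3yr * PORTS

print(round(cost_per_port_3yr, 2))   # ~5.48, matching the table
print(round(total_cost_3yr, 2))      # ~131.51 (report: $131.62)
```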
Virtualization Scale
Within one layer 2 domain, the Brocade VDX™ 6730-32 can support approximately 900 physical servers and 29,000 VMs. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch: one for management and one for IP storage. Each VM requires one MAC entry, one /32 IP host route and one ARP entry in a data center switch. Clearly, the Brocade VDX™ 6730-32 can support a much larger virtual network than the number of its physical ports, assuring virtualization scale in layer 2 domains. From a larger VDX perspective, the Brocade VDX™ data center switches can support 1,440 ports and tens of thousands of VMs.
Brocade VDX™ 6730-32 Virtual Scale Calculation

/32 IP Host Route Table Size                              *
MAC Address Table Size                                    30,000
ARP Entry Size                                            *
Assume 30:1 VM:Physical Machine (entries per server)      32
Total Number of Physical Servers per L2 Domain            938
Number of VMs per L2 Domain                               29,063

* This is an L2 device, so technically both the host IP and ARP tables can hold 30K entries.
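The table's arithmetic follows from the assumptions in the text: 30 VMs plus two host entries per physical server against a 30K-entry table. A minimal sketch of that calculation, under those stated assumptions:

```python
# Sketch of the virtualization-scale arithmetic behind the table above.
# Assumes 30 VMs per physical server plus two host entries (management,
# IP storage) per server, against a 30K-entry table, per the report.
import math

TABLE_SIZE = 30_000
VMS_PER_SERVER = 30
HOST_ENTRIES_PER_SERVER = 2
entries_per_server = VMS_PER_SERVER + HOST_ENTRIES_PER_SERVER  # 32

servers_per_domain = math.ceil(TABLE_SIZE / entries_per_server)
vms_per_domain = servers_per_domain * VMS_PER_SERVER

print(servers_per_domain)   # 938, matching the table
print(vms_per_domain)       # 28,140 -- same ballpark as the table's 29,063
```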
Discussion:
New Ethernet fabric-based architectures are emerging to support highly virtualized data center environments and the building of cloud infrastructures, allowing IT organizations to be more agile and customer responsive. Ethernet fabrics will transition data center networking from inefficient and cumbersome Spanning Tree Protocol (STP) networks to high-performance, flat, layer 2 networks built for the east-west traffic demanded by server virtualization and highly clustered applications. Rather than STP, the Brocade VDX™ 6730 series utilizes a hardware-based Inter-Switch Link (ISL) trunking algorithm, which was not tested during the Fall Lippis/Ixia test.
The Brocade VDX™ 6730-32 with VCS technology offers 24 wire-speed 10GbE ports of layer 2 cut-through forwarding in a 1 RU footprint, plus eight 8 Gbps native Fibre Channel ports. Designed with storage and data traffic convergence in mind, the Brocade VDX™ 6730-32 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, with large data-center-grade buffers to keep traffic moving.
This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 521 and 606 ns for layer 2 forwarding. Further, there was minimal delay variation, between 4 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, and boosting confidence that the VDX™ 6730-32 will perform well in converged network/storage configurations. In addition, the VDX™ 6730-32 demonstrated 100% throughput at all packet sizes. Redundant power and fans, although not tested here, along with various high availability features, ensure that the VDX™ 6730-32 is always available and reliable for data center operations.
The VDX™ 6730-32 is built on an innovative Brocade-designed ASIC, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The VDX™ 6730-32 is the next level up from the VDX™ 6720 ToR switches in the VDX family, which can be configured as a single logical chassis and scales to support 60 10GbE and 16 8 Gbps native Fibre Channel ports.
A key design center of the VDX™ 6730 is its support for Ethernet storage connectivity: it connects to Fibre Channel Storage Area Networks (SANs) in addition to Fibre Channel over Ethernet (FCoE), iSCSI, and NAS storage, providing multiple unified Ethernet storage connectivity options.
Brocade VCS capabilities allow customers to start from small two-switch ToR deployments, such as the VDX™ 6730 tested here, and scale to large virtualization-optimized Ethernet fabrics with tens of thousands of ports over time. VCS is a software upgrade feature of the VDX™ 6730. Furthermore, as proven here, VCS-based Ethernet fabrics are convergence ready, allowing customers to deploy one network for storage—FCoE, iSCSI, NAS—and IP data traffic.
Brocade VCS technology boasts lower capital cost by collapsing the access and aggregation layers of a three-tier network into two, and reduced operating cost through managing fewer devices. Customers will be able to seamlessly transition their existing multi-tier hierarchical networks to a much simpler, virtualized, and converged data center network—moving at their own pace, according to Brocade.
Extreme Networks launched its new entry into the enterprise data center and cloud computing markets with its new line of BlackDiamond® X series switches in May of 2011. This line of Top-of-Rack and core switches pushes the envelope on performance, power consumption, density and data center network design. Extreme Networks submitted its new BlackDiamond® X8 (BDX8) core switch into the Fall Lippis/Ixia industry test. The BDX8 is the highest density 10GbE and 40GbE core switch on the market. It is capable of supporting 768 10GbE and 192 40GbE ports in the smallest packaging footprint of one third of a rack, or only 14.5 RU. This high-port-density core switch in a small footprint offers new flexibility in building enterprise core networks and cloud data centers. For example, a single BDX8 could support 768 servers connected directly at 10GbE, creating a non-blocking, flat layer 2 network. This is just one of many design options afforded by the BDX8's attributes.
From an internal switching capacity point of view, the BDX8 offers 1.28 Tbps of slot capacity and more than 20 Tbps of switching fabric capacity, enough to support the 768 ports of 10 GbE or 192 ports of 40 GbE per chassis. In addition to raw port density and footprint, the BDX8 offers many software features. Notable software features include automated configuration and provisioning, and XNV™ (ExtremeXOS Network Virtualization), which provides automation of VM (Virtual Machine) tracking, reporting and migration of VPPs (Virtual Port Profiles). These are but a few of the software features enabled through the ExtremeXOS® network operating system. The BlackDiamond X8 is also key to Extreme Networks' OpenFabric architecture, which enables best-in-class choice of switching, compute, storage and virtualization solutions.
Extreme Networks BlackDiamond® X8 Core Switch
Video feature: Click to view Extreme Networks video podcast
Extreme Networks BlackDiamond® X8 Core Switch Test Configuration*

Device under test: BDX8 (http://www.extremenetworks.com), ExtremeXOS 15.1, 352 ports
Test Equipment: Ixia XM12 High Performance Chassis, IxOS 6.10 EA, IxNetwork 6.10 EA (http://www.ixiacom.com/)
Ixia Line Cards: Xcellon Flex AP10G16S (16-port 10G module); Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G)
Cabling: 10GbE optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron MM fiber, 850nm SFP+ transceivers (www.leviton.com); Siemon QSFP+ Passive Copper Cable, 40 GbE, 3 meters (QSFP-FA-010); Siemon Moray Low Power Active Optical Cable Assemblies, Single Mode, QSFP+ 40GbE optical cable, 7 meters (QSFP30-03) (www.siemon.com)

* Extreme BDX8 was configured with 256 10GbE ports and 24 40GbE ports, an equivalent of 352 10GbE ports.
The BDX8 breaks all of our previous records in core switch testing for performance, latency, power consumption, port density and packaging design. In fact, the Extreme Networks BDX8 was between 2 and 9 times faster at forwarding packets over a wide range of frame sizes than previous core switches measured during Lippis/Ixia iSimCity tests of core switches from Alcatel-Lucent, Arista and Juniper. In other words, the BDX8 is the fastest core switch we have tested, with the lowest latency measurements. The BDX8 is based upon the latest Broadcom merchant silicon chipset.
For the fall Lippis/Ixia test, we populated the Extreme Networks BlackDiamond® X8 with 256 10GbE ports and 24 40GbE ports, thirty-three percent of its capacity. This was the highest capacity switch tested during the entire series of Lippis/Ixia cloud network tests at iSimCity to date.
We tested and measured the BDX8 in both cut-through and store-and-forward modes in an effort to understand the difference between these latency measurements. Further, the latest merchant silicon forwards smaller packets in store-and-forward mode while larger packets are forwarded in cut-through, making this new generation of switches hybrid cut-through/store-and-forward devices. The Extreme Networks BlackDiamond X8 is unique in that its switching architecture forwards packets in both cut-through and store-and-forward modes. As such, we measured the BlackDiamond X8's latency with both measurement methods. The purpose is not to compare store-and-forward vs. cut-through, but to demonstrate the low latency results obtained independent of forwarding method. During store-and-forward testing, the latency of packet serialization delay (based on packet size) is removed from the reported latency number by the test equipment. Therefore, to compare a cut-through latency measurement to a store-and-forward one, the packet serialization delay needs to be added to the store-and-forward latency number. For example, to a store-and-forward latency number of 800 ns for a 1,518-byte packet, an additional latency of 1,240 ns (the serialization delay of a 1,518-byte packet at 10 Gbps) must be added. This difference can be significant. Note that other device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.
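The serialization-delay adjustment described above is straightforward to compute. The sketch below counts only the frame bits themselves; the report's 1,240 ns figure for a 1,518-byte frame appears to also fold in per-frame overhead (such as preamble and inter-frame gap), which is omitted here.

```python
# Minimal sketch of the serialization-delay adjustment used to compare
# store-and-forward latency readings with cut-through ones. Only frame
# bits are counted; per-frame overhead (preamble, inter-frame gap) is
# omitted, which is why this yields ~1214 ns rather than the report's
# ~1240 ns for a 1,518-byte frame at 10 Gbps.
def serialization_delay_ns(frame_bytes: int, line_rate_gbps: float) -> float:
    """Time to clock a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / line_rate_gbps

# Example: adjust a store-and-forward reading for comparison.
sf_latency_ns = 800.0  # hypothetical store-and-forward reading
adjusted = sf_latency_ns + serialization_delay_ns(1518, 10)

print(serialization_delay_ns(1518, 10))  # 1214.4 ns (frame bits only)
print(adjusted)                          # 2014.4 ns
```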
The Extreme Networks BDX8 was tested across all 256 ports of 10GbE and 24 ports of 40GbE. Its average cut-through latency ranged from a low of 2,324 ns (2.3 μs) to a high of 12,755 ns (12.7 μs) at the jumbo 9216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate, a welcome measurement for converged I/O implementations.
Extreme Networks BDX8 RFC 2544 Layer 2 Latency Test, Cut-Through Mode (ns)

Packet Size (Bytes)    64        128       192       256       512       1024      1280      1518      2176      9216
Max Latency            2510      2657      2862      2997      3542      4642      4877      5277      6402      18262
Avg Latency            2324.921  2442.854  2579.339  2673.136  3137.229  3919.900  4169.825  4396.996  5087.025  12755.718
Min Latency            2100      2162      2222      2277      2522      2897      2977      3062      3317      5982
Avg. Delay Variation   8.314     5.403     9.179     5.386     7.636     7.146     5.632     9.314     4.986     9.090
The Extreme Networks BDX8 was tested across all 256 ports of 10GbE and 24 ports of 40GbE. Its average store-and-forward latency ranged from a low of 2,273 ns (2.2 μs) to a high of 12,095 ns (12.1 μs) at the jumbo 9216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Extreme Networks BDX8's measured average cut-through latency ranged from a low of 2,321 ns at 64 bytes to a high of 12,719 ns (12.7 μs) at the jumbo 9216-byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate; again, a welcome measurement for converged I/O implementations.
Extreme Networks BDX8 RFC 2544 Layer 2 Latency Test, Store-and-Forward Mode (ns)

Packet Size (Bytes)    64        128       192       256       512       1024      1280      1518      2176      9216
Max Latency            2482      2642      2797      2910      3442      4547      4990      5430      6762      21317
Avg Latency            2273.521  2351.750  2436.686  2493.575  2767.593  3331.511  3588.750  3830.261  4542.814  12095.050
Min Latency            2040      2100      2160      2180      2417      2780      2980      3140      3660      8960
Avg. Delay Variation   8.314     5.403     9.179     5.386     7.636     7.146     5.632     9.314     4.986     9.079
Extreme Networks BDX8 RFC 2544 Layer 3 Latency Test, Cut-Through Mode (ns)

Packet Size (Bytes)    64        128       192       256       512       1024      1280      1518      2176      9216
Max Latency            2550      2722      2902      3162      3770      5102      5430      6010      7290      21970
Avg Latency            2321.164  2440.482  2577.779  2668.411  3132.839  3898.596  4162.443  4406.857  5126.400  12719.646
Min Latency            2100      2162      2230      2302      2547      2870      2987      3070      3302      5997
Avg. Delay Variation   9.982     5.454     9.200     5.411     7.593     7.164     5.529     9.321     4.982     9.118
For layer 3 traffic, the Extreme Networks BDX8's measured average store-and-forward latency ranged from a low of 2,273 ns at 64 bytes to a high of 11,985 ns (11.9 μs) at the jumbo 9216-byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate; again, a welcome measurement for converged I/O implementations.
The above latency measurements are the lowest we have measured for core switches by a wide margin. The BDX8 forwards packets six to ten times faster than other core switches we have tested.
The Extreme Networks BDX8 demonstrated 100% throughput as a percentage of line rate across all 256 10GbE and 24 40GbE ports. In other words, not a single packet was dropped while the Extreme Networks BDX8 was presented with enough traffic to populate its 256 10GbE and 24 40GbE ports at line rate simultaneously, for both L2 and L3 traffic flows, measured in both cut-through and store-and-forward modes; a first in these Lippis/Ixia tests.
Extreme Networks BDX8 RFC 2544 Layer 3 Latency Test, Store-and-Forward Mode (ns)

Packet Size (Bytes)    64        128       192       256       512       1024      1280      1518      2176      9216
Max Latency            2542      2717      2937      3117      3657      5062      5537      6130      7677      25070
Avg Latency            2273.346  2353.996  2432.400  2492.879  2766.289  3324.279  3581.018  3826.468  4504.139  11985.946
Min Latency            2040      2100      2140      2220      2400      2780      3000      3160      3640      8960
Avg. Delay Variation   9.982     5.454     9.200     5.411     7.593     7.164     5.529     9.321     4.982     9.118
Extreme Networks BDX8 RFC 2544 L2 & L3 Throughput Test (% Line Rate)

Packet Size (Bytes)  64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2              100  100  100  100  100  100   100   100   100   100
Layer 3              100  100  100  100  100  100   100   100   100   100
The Extreme Networks BDX8 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. We tested congestion in both 10GbE and 40GbE configurations. A single 10GbE port was flooded at 150% of line rate. In addition, a single 40GbE port was flooded at 150% of line rate; another first in these Lippis/Ixia tests.
The Extreme Networks BDX8 did not exhibit HOL blocking, which means that as the 10GbE and 40GbE ports on the BDX8 became congested, congestion did not impact the performance of other ports. Back pressure was detected: the BDX8 did send flow control frames to the Ixia test gear, signaling it to slow down the rate of incoming traffic flow.
Priority Flow Control (PFC) was enabled on the BDX8 during the congestion test, as it is Extreme Networks' direction to run Data Center Bridging protocols on the BDX8 in support of data center fabric requirements. We expect more core switches will enable and support PFC, as this enables Ethernet fabrics to support storage and IP traffic flows under congestion conditions without packet loss, thus maintaining 100% throughput.
The Extreme Networks BDX8 demonstrated 100% aggregated throughput for IP multicast traffic measured in cut-through mode, with latencies ranging from 2,382 ns at the 64-byte packet size to 3,824 ns at the 9216-byte packet size.
Extreme Networks BDX8 RFC 3918 IP Multicast Test, Cut-Through Mode

Frame (Bytes)               64        128       192       256       512       1024      1280      1518      2176      9216
Agg. Tput Rate (%)          100       100       100       100       100       100       100       100       100       100
Agg. Average Latency (ns)   2382.767  2448.613  2519.792  2578.477  2835.100  3117.849  3143.043  3161.283  3217.358  3824.054
Extreme Networks BDX8 RFC 2889 Congestion Test

150% of line rate into a single 40GbE port:

Frame (Bytes)  Head of Line Blocking  Back Pressure  Agg. Forwarding Rate (% Line Rate)  Agg. Flow Control Frames (L2)  Agg. Flow Control Frames (L3)
64             no                     yes            100                                 11305858                       11303930
128            no                     yes            100                                 9679780                        9682464
192            no                     yes            100                                 9470606                        9466194
256            no                     yes            100                                 29566584                       10193602
512            no                     yes            100                                 23892956                       8909568
1024           no                     yes            100                                 22794016                       8915370
1280           no                     yes            100                                 24439074                       7944288
1518           no                     yes            100                                 21749118                       9093160
2176           no                     yes            100                                 20897472                       9101282
9216           no                     yes            100                                 12938948                       8977442

150% of line rate into a single 10GbE port:

Frame (Bytes)  Head of Line Blocking  Back Pressure  Agg. Forwarding Rate (% Line Rate)  Agg. Flow Control Frames (L2)  Agg. Flow Control Frames (L3)
64             no                     yes            100                                 8641332                        9016074
128            no                     yes            100                                 5460684                        5459398
192            no                     yes            100                                 6726918                        6399598
256            no                     yes            100                                 7294218                        5798720
512            no                     yes            100                                 8700844                        5081322
1024           no                     yes            100                                 5887130                        5342640
1280           no                     yes            100                                 5076130                        5076140
1518           no                     yes            100                                 4841474                        4840594
2176           no                     yes            100                                 5268752                        5268552
9216           no                     yes            100                                 5311742                        5219964
The Extreme Networks BDX8 performed very well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 4.7 μs and 4.9 μs measured in cut-through and store-and-forward modes, respectively. This measurement, too, breaks all previous records, as the BDX8 is between 2 and 10 times faster at forwarding cloud-based protocols under load.
The Extreme Networks BDX8 represents a new breed of cloud network core switches with power efficiency as a core value. Its WattsATIS/port is 8.1 and its TEER value is 117. To put the power measurement into perspective, other core switches are between 30% and 2.6 times less power efficient. In other words, the BDX8 is the most power-efficient core switch tested to date in the Lippis/Ixia iSimCity lab.
Note that the total number of BDX8 10GbE ports used to calculate Watts/10GbE was 352, as the 24 40GbE ports, or 96 10GbE-equivalent ports, were added to the 256 10GbE ports for the WattsATIS/port calculation. The 40GbE ports are essentially four 10GbE ports from a power consumption point of view.
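The 10GbE-equivalent port count above reduces to a one-line calculation. A minimal sketch, using the measured WattsATIS/port figure from this test:

```python
# Sketch of the 10GbE-equivalent port count used for the WattsATIS/port
# calculation: each 40GbE port counts as four 10GbE ports.
PORTS_10G = 256
PORTS_40G = 24
WATTS_ATIS_PER_PORT = 8.1  # measured in this test

equivalent_10g_ports = PORTS_10G + PORTS_40G * 4       # 352
total_watts = WATTS_ATIS_PER_PORT * equivalent_10g_ports

print(equivalent_10g_ports)   # 352
print(round(total_watts, 1))  # 2851.2 W across the tested configuration
```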
While these are the lowest Watts/10GbE port and highest TEER values observed for core switches, the Extreme Networks BDX8's actual Watts/10GbE port is even lower; we estimate approximately 5 Watts/10GbE port when fully populated with 768 10GbE or 192 40GbE ports. During the Lippis/Ixia test, the BDX8 was only populated to a third of its port capacity but was equipped with power supplies, fans, management and switch fabric modules for full port density population. Therefore, when this full power capacity is divided across a fully populated BDX8, its WattsATIS per 10GbE port will be lower than the measurement observed.
The Extreme Networks BDX8 power cost per 10GbE port is calculated at $9.87 per year. The three-year cost to power the Extreme Networks BDX8 is estimated at $10,424.05 and represents approximately 1.79% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
Extreme Networks BDX8 Cloud Simulation Test

Traffic Direction  Traffic Type              Cut-Through Avg Latency (ns)  Store-and-Forward Avg Latency (ns)
East-West          Database_to_Server        2574                          3773
East-West          Server_to_Database        1413                          1339
East-West          HTTP                      4662                          4887
East-West          iSCSI-Server_to_Storage   1945                          2097
East-West          iSCSI-Storage_to_Server   3112                          3135
North-South        Client_to_Server          3426                          3368
North-South        Server_to_Client          3576                          3808

Extreme Networks BDX8 Power Consumption Test

WattsATIS/10GbE port                       8.1
3-Year Cost/WattsATIS/10GbE                $29.61
Total power cost/3-Year                    $10,424.05
3-yr energy cost as a % of list price      1.79%
TEER Value                                 117
Cooling                                    Front to Back, Reversible
Virtualization Scale Calculation
Within one layer 2 domain, the Extreme Networks BlackDiamond® X8 can support approximately 900 physical servers and 27,125 VMs, thanks to its support of a 28K IP host route table and a 128K MAC address table. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch: one for management and one for IP storage. Each VM requires one MAC entry, one /32 IP host route and one ARP entry in a data center switch. In fact, its IP host route and MAC address table sizes are larger than the number of physical ports that it can support at this VM-to-physical-server ratio.
From a deployment perspective, very large deployments are enabled. For example, assume 40 VMs per server: with a 28K host route table, 700 servers can be connected into a single BDX8 in an End of Row (EOR) configuration. Here, the BDX8 can serve as the routed boundary for the entire row, accommodating 28,000 VMs and approximately 700 physical servers. Scaling further, multiple BDX8s, each placed at EOR, could be connected to a core routed BDX8 that serves as the layer 3 interconnect for the EOR BDX8s. Since each EOR BDX8 is a layer 3 domain, the core BDX8 does not see the individual IP host table entries from each VM (which are hidden behind the EOR BDX8). A very large two-tier network of BDX8s can be created, with the first tier being the EOR BDX8s, which route, and the second tier being the core BDX8(s), which also route, thus accommodating hundreds of thousands of VMs. The BDX8 can scale very large in its support of virtualized data center environments.
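The two-tier EOR scaling argument above reduces to simple arithmetic. A minimal sketch, under the stated assumptions of 40 VMs per server and a 28K host route table per EOR switch; the number of EOR rows is a hypothetical value for illustration:

```python
# Sketch of the EOR scaling arithmetic: each EOR BDX8 is its own layer 3
# domain, so VM host routes stay local and the core tier only routes
# between rows. Assumes 40 VMs per server and a 28K host route table.
HOST_TABLE = 28_000
VMS_PER_SERVER = 40

servers_per_eor = HOST_TABLE // VMS_PER_SERVER   # 700 servers per row
vms_per_eor = servers_per_eor * VMS_PER_SERVER   # 28,000 VMs per row

eor_rows = 10  # hypothetical number of EOR rows behind a core tier
total_vms = vms_per_eor * eor_rows

print(servers_per_eor, vms_per_eor)  # 700 28000
print(total_vms)                     # 280000 VMs across a two-tier design
```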
Discussion:
The Extreme Networks BDX8 seeks to offer high performance, low latency, high port density of 10 and 40GbE, and low power consumption for private and public cloud networks. This architecture proved its value, as it delivered the lowest latency measurements to date for core switches while populated with the equivalent of 352 10GbE ports running traffic at line rate. Not a single packet was dropped, offering 100% throughput at line rate for the equivalent of 352 10GbE ports. Even at 8.1 Watts/10GbE port, the Extreme Networks BDX8 consumes the least amount of power of the core switches measured to date in the Lippis/Ixia industry test at iSimCity. Observations of the BDX8's IP multicast and congestion test performance were the best measured to date in terms of latency and throughput under congestion conditions in both 10GbE and 40GbE scenarios.
The Extreme Networks BDX8 was found to have low power consumption, front-to-back cooling, front-accessible components and an ultra-compact form factor. The Extreme Networks BDX8 is designed to meet the requirements for private and public cloud networks. Based upon these Lippis/Ixia tests, it achieves its design goals.
Extreme Networks launched its new entry into the enterprise data center and cloud computing markets with its new line of Summit® X series switches in May of 2011. This line of Top of Rack switches pushes the envelope on performance, power consumption, density and data center network design.
The Summit® X670 series are purpose-built Top-of-Rack (ToR) switches designed to support emerging 10GbE-enabled servers in enterprise and cloud data centers. The Summit X670 is ready for 40GbE uplinks thanks to its optional 40GbE support. In addition, the Summit X670 provides both 1 and 10GbE physical server connections to ease the transition to high-performance server connections while providing unique features for virtualized data center environments. The ExtremeXOS® operating system, with its high availability attributes, provides simplicity and ease of operation through the use of one OS everywhere in the network.
Extreme Networks submitted its new Summit® X670V ToR switch to the Fall Lippis/Ixia industry test. The X670V is a member of the Summit® X family of ToR switches and supports 48 10GbE and 4 40GbE ports in a small 1RU footprint. The X670V in combination with the Extreme Networks BlackDiamond® X8 offers new flexibility in building enterprise and cloud data center networks, with the X670V providing server connectivity at 1 and 10GbE while connecting into the BDX8 at either 10 or 40GbE, under a single OS image in both devices.
Extreme Networks Summit® X670V Top-of-Rack Switch
Extreme Networks Summit® X670V Test Configuration* (Hardware / Software Version / Port Density)
Device under test: X670V (http://www.extremenetworks.com), software version 12.6, 64 ports
Test Equipment Ixia XM12 High Performance Chassis IxOS 6.10 EA SP2 IxNetwork 6.10 EA
Ixia Line Cards Xcellon Flex AP10G16S 16 port 10G module
Xcellon Flex Combo 10/40GE AP 16 port 10G / 4 port 40G
http://www.ixiacom.com/
Cabling 10GbE Optical SFP+ connectors. Laser optimized duplex lc-lc 50 micron mm fiber, 850nm SFP+ transceivers
www.leviton.com
Siemon QSFP+ Passive Copper Cable 40 GbE 3 meter copper QSFP-FA-010
Siemon Moray Low Power Active Optical Cable Assemblies Single Mode QSFP+ 40GbE optical cable QSFP30-03 7 meters
www.siemon.com
* The Extreme X670V was configured with 48 10GbE ports and 4 40GbE ports, the equivalent of 64 10GbE ports.
For the fall Lippis/Ixia test, we populated and tested the Extreme Networks Summit® X670V with 48 10GbE ports and 4 40GbE ports, its full capacity. Its average latency ranged from a low of 900 ns to a high of 1,415 ns for layer 2 traffic. Its average delay variation ranged between 7 and 10 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Extreme Networks X670V's measured average latency ranged from a low of 907 ns at 64 Bytes to a high of 1,375 ns. Its average delay variation for layer 3 traffic ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate.
Extreme Networks Summit® X670V RFC 2544 Layer 2 Latency Test in Cut-Through Mode (ns)
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Max Latency 1020 1040 1200 1200 1360 1580 1560 1560 1580 1580
Avg Latency 900.192 915.519 1056.519 1067.115 1208.731 1412.808 1415.962 1415.327 1411.923 1410.019
Min Latency 737 742 762 782 782 802 817 817 817 802
Avg. delay Variation 9.346 7.962 10.25 6.942 7.865 9.712 7.827 10.192 7.962 10.731
Extreme Networks Summit® X670V RFC 2544 Layer 3 Latency Test in Cut-Through Mode (ns)
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Max Latency 1040 1060 1160 1180 1320 1520 1540 1540 1540 1520
Avg Latency 907.923 943.135 1026.615 1029.731 1176.923 1375.019 1373.865 1374.077 1372.712 1370.327
Min Latency 762 747 790 770 810 810 847 790 810 810
Avg. delay Variation 8.135 5.423 8.942 5.558 7.019 6.885 4.558 9.346 5.058 9.712
The Extreme Networks X670V demonstrated 100% throughput as a percentage of line rate across all 48 10GbE and 4 40GbE ports. In other words, not a single packet was dropped while the X670V was presented with enough traffic to populate its 48 10GbE and 4 40GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.
Two congestion tests were conducted using the same methodology, stressing the X670V ToR switch's congestion management attributes at both 10GbE and 40GbE. The Extreme Networks X670V demonstrated 100% of aggregated forwarding rate as a percentage of line rate during congestion conditions in both tests. A single 10GbE port was flooded at 150% of line rate, and in a separate test a single 40GbE port was flooded at 150% of line rate.
The Extreme Networks X670V did not exhibit HOL blocking, which means that as the 10GbE and 40GbE ports on the X670V became congested, the performance of other ports was not impacted. Back pressure was detected: the X670V sent flow control frames to the Ixia test gear signaling it to slow the rate of incoming traffic, which is common in ToR switches.
Extreme Networks Summit® X670V RFC 2544 L2 & L3 Throughput Test (Throughput % Line Rate)
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Layer 2 100 100 100 100 100 100 100 100 100 100
Layer 3 100 100 100 100 100 100 100 100 100 100
Extreme Networks Summit® X670V RFC 2889 Congestion Test
150% of Line Rate into a single 40GbE port:
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Head of Line Blocking no no no no no no no no no no
Back Pressure yes yes yes yes yes yes yes yes yes yes
Agg Forwarding Rate (% Line Rate) 100 100 100 100 100 100 100 100 100 100
Layer 2 Agg Flow Control Frames 10265682 7951820 9938718 28243060 26799154 31017876 25762198 28271714 27102102 22865370
Layer 3 Agg Flow Control Frames 10273162 9517872 10655066 9508906 9534112 7900572 9482604 9345090 9294636 9521042

150% of Line Rate into a single 10GbE port:
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Head of Line Blocking no no no no no no no no no no
Back Pressure yes yes yes yes yes yes yes yes yes yes
Agg Forwarding Rate (% Line Rate) 100 100 100 100 100 100 100 100 100 100
Layer 2 Agg Flow Control Frames 6976188 4921806 6453360 4438204 3033068 3125294 17307786 13933834 13393656 9994308
Layer 3 Agg Flow Control Frames 6641074 5720526 6751626 6167986 5425000 5668574 5540616 5320560 5339310 5042648
The Extreme Networks X670V demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 955 ns at 64 Byte packets to 1009 ns at 9216 Byte packets.
The Extreme Networks X670V performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed and its latency stayed under 3.1 µs, measured in cut-through mode.
The Extreme Networks X670V represents a new breed of cloud network ToR switches with power efficiency as a core value. Its WattsATIS/port is 3.21 and its TEER value is 295. Note that 64 10GbE ports were used in the Watts/10GbE calculation: the 4 40GbE ports count as 16 10GbE ports added to the 48 10GbE ports, since a 40GbE port is essentially four 10GbE ports from a power consumption point of view.
The Extreme Networks X670V's power cost per 10GbE port is calculated at $11.74 over a 3-year period. The three-year cost to power the X670V is estimated at $751.09, less than 2.2% of its list price. In keeping with data center best practices, its cooling fans flow air front to back.
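The power-cost figures follow from simple arithmetic on WattsATIS/port. A sketch, assuming the electricity rate implied by the report's own numbers (roughly $0.139/kWh; the report does not state its rate explicitly):

```python
# Reproduce the X670V 3-year power-cost figures from WattsATIS/port.
WATTS_PER_PORT = 3.21    # WattsATIS per 10GbE port (from the table)
PORTS = 64               # 48 x 10GbE + 4 x 40GbE counted as 16 x 10GbE
USD_PER_KWH = 0.139      # assumed rate, implied by the report's figures
HOURS_3YR = 24 * 365 * 3

kwh_per_port = WATTS_PER_PORT * HOURS_3YR / 1000   # ~84.4 kWh over 3 years
cost_per_port = kwh_per_port * USD_PER_KWH         # ~$11.73 (report: $11.74)
total_cost = cost_per_port * PORTS                 # ~$750 (report: $751.09)
```

Small differences from the published figures come from rounding in the report's intermediate values.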
Extreme Networks Summit® X670V Cloud Simulation Test
Traffic Direction / Traffic Type / Cut-through Avg Latency (ns)
East-West Database_to_Server 2568
East-West Server_to_Database 1354
East-West HTTP 2733
East-West iSCSI-Server_to_Storage 1911
East-West iSCSI-Storage_to_Server 3110
North-South Client_to_Server 1206
North-South Server_to_Client 1001
Extreme Networks Summit® X670V RFC 3918 IP Multicast Test, Cut-Through Mode
Frame (Bytes) 64 128 192 256 512 1024 1280 1518 2176 9216
Agg Tput Rate (%) 100 100 100 100 100 100 100 100 100 100
Agg Average Latency (ns) 955.824 968.431 987.51 985.98 998.157 1011 1012.529 1010.157 1009.863 1009.824
Extreme Networks Summit® X670V Power Consumption Test
WattsATIS/10GbE port 3.21
3-Year Cost/WattsATIS/10GbE $11.74
Total power cost/3-Year $751.09
3 yr energy cost as a % of list price 2.21%
TEER Value 295
Cooling Front to Back, Reversible
The Extreme Networks X670V seeks to offer a range of server and ToR connectivity options to IT architects as they design their data center and cloud network facilities. The X670V is competitive with other ToR switches from a price, performance, latency and power consumption point of view. Combined with the BlackDiamond X8, the X670V offers a powerful two-tier cloud network architecture, bringing together the performance advantages observed during the Lippis/Ixia test and the software features found in the ExtremeXOS® network operating system.
The Extreme Networks X670V was found to have low power consumption, front-to-back cooling, front-accessible components and a compact 1RU form factor. The Extreme Networks X670V is designed to meet the requirements for private and public cloud networks. Based upon these Lippis/Ixia tests, it achieves its design goals.
Discussion:
Virtualization Scale Calculation
Within one layer 2 domain, the Extreme Networks Summit® X670V can support approximately 188 physical servers and 5,813 VMs, thanks to its 6K IP host route table and 128K MAC address table. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch (e.g., one for management and one for IP storage). Each VM additionally requires one MAC entry, one /32 IP host route and one ARP entry in a data center switch. Clearly, the Extreme Networks Summit® X670V can support a much larger logical network than its physical port count suggests.
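The calculation above can be sketched directly. A rough check, treating the "6K" IP host table as approximately 6,000 entries (an assumption; the exact table depth is not stated):

```python
# Rough check of the virtualization-scale claim: each physical server
# consumes 2 host entries of its own plus 1 entry per VM, at 30 VMs/server.
HOST_TABLE = 6_000                          # assumed "6K" IP host route table
VMS_PER_SERVER = 30                         # from the text
ENTRIES_PER_SERVER = VMS_PER_SERVER + 2     # 30 VM entries + 2 server entries

servers = HOST_TABLE // ENTRIES_PER_SERVER  # ~187 (report: ~188)
vms = servers * VMS_PER_SERVER              # ~5,610 (report: ~5,813)
```

The result lands close to the report's approximate figures; the small gap reflects rounding and the exact table depth assumed.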
Dell/Force10 S-Series S4810
The Dell/Force10 S-Series S4810 is a low-latency (nanosecond scale) 10/40GbE ToR/leaf switch purpose-built for applications in high-performance data center and computing environments. The compact S4810 design provides a port density of 48 dual-speed 1/10GbE (SFP+) ports as well as four 40GbE QSFP+ uplinks to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. For the Lippis/Ixia test, the S4810 was configured as a 48-port wire speed 10GbE ToR switch.
The Dell/Force10 S4810 ToR switch was tested across all 48 ports of 10GbE. Its average latency ranged from a low of 799 ns to a high of 885 ns for layer 2 traffic. Its average delay variation ranged between 5 and 9.2 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Dell/Force10 S4810's average latency ranged from a low of 796 ns to a high of 868 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 4.6 and 9.1 ns, providing consistent latency across all packet sizes at full line rate.
Dell/Force10 Networks S4810 Test Configuration (Hardware / Software Version / Port Density)
Device under test: S4810 (http://www.force10networks.com), software version 8.3.7.0E4, 48 ports
Test Equipment Ixia XM12 High Performance Chassis IxOS 6.00 EA
Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
IxNetwork 5.70 EA
LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules
1U Application Server IxAutomate 6.90 GA SP1
http://www.ixiacom.com/
Cabling Optical SFP+ connectors. Laser optimized duplex lc-lc 50 micron mm fiber, 850nm SFP+ transceivers
www.leviton.com
Dell/Force10 Networks S4810 RFC 2544 Layer 2 Latency Test (ns)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Max Latency 940 940 920 880 900 880 880 880 860
Avg Latency 874.479 885 867.604 825.292 846.521 819.896 805.063 812.063 799.792
Min Latency 760 780 740 720 780 720 720 720 720
Avg. delay Variation 9.104 5.229 5.458 7.938 6.729 6 8.979 5 9.271

Dell/Force10 Networks S4810 RFC 2544 Layer 3 Latency Test (ns)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Max Latency 940 940 940 880 900 880 880 880 860
Avg Latency 862.417 868.542 863.771 822.625 843.021 817.833 802.375 809.708 796.063
Min Latency 740 780 680 720 740 700 700 700 720
Avg. delay Variation 8.458 5.25 5.521 7.688 6.688 4.646 9.063 5.083 9.146
The Dell/Force10 S4810 demonstrated 100% throughput as a percentage of line rate across all 48 10GbE ports. In other words, not a single packet was dropped while the S4810 was presented with enough traffic to populate its 48 10GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.
Dell/Force10 Networks S4810 RFC 2544 L2 & L3 Throughput Test (Throughput % Line Rate)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Layer 2 100 100 100 100 100 100 100 100 100
Layer 3 100 100 100 100 100 100 100 100 100

Dell/Force10 Networks S4810 RFC 2889 Congestion Test (% Line Rate)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Layer 2 Agg. Forwarding Rate 100 100 100 100 100 100 100 100 100
Layer 3 Agg. Forwarding Rate 100 100 100 100 100 100 100 100 100
Head of Line Blocking no no no no no no no no no
Back Pressure yes yes yes yes yes yes yes yes yes
L2 Agg Flow Control Frames 5761872 3612428 3841000 3073936 3158216 2958772 2876596 2730684 2681572
L3 Agg Flow Control Frames 5762860 3614948 3851964 3070936 3155552 2957128 2875592 2730528 2677416
The Dell/Force10 S4810 demonstrated 100% of aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The S4810 did not exhibit HOL blocking, which means that as the S4810's 10GbE port became congested, it did not impact the performance of other ports. As with most ToR switches, the S4810 did use back pressure, as the Ixia test gear detected flow control frames.
The Dell/Force10 S4810 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 984 ns at 64 Byte packets to 1442 ns at 1024 and 1280 Byte packets. The S4810 was configured in cut-through mode during this test.
Dell/Force10 Networks S4810 RFC 3918 IP Multicast Test
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Agg Tput Rate (%) 100 100 100 100 100 100 100 100 100
Agg. Average Latency (µs) 0.984 1.033 1.062 1.232 1.442 1.442 1.429 1.438 1.437
The Dell/Force10 S4810 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed and its latency stayed under 6900 ns. The S4810 was configured in cut-through mode during this test.
The Dell/Force10 S4810 represents a new breed of cloud network leaf or ToR switches with power efficiency as a core value. Its WattsATIS/port is 4.03 and its TEER value is 235. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE port is estimated at $4.91 per year. The three-year cost to power the S4810 is estimated at $707.22, approximately 3% of its list price. In keeping with data center best practices, its cooling fans flow air front to back.
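The three-year figure follows from the stated per-year, per-port cost; a quick cross-check on the numbers in the text:

```python
# Cross-check the S4810 power-cost arithmetic from the stated figures.
PORTS = 48                      # S4810 tested with 48 x 10GbE ports
COST_PER_PORT_PER_YEAR = 4.91   # $/10GbE port/year (from the text)

three_year_total = COST_PER_PORT_PER_YEAR * 3 * PORTS   # ~$707 (report: $707.22)
implied_list_price = three_year_total / 0.0283          # ~$25,000 at 2.83% of list
```

The implied list price is a derived estimate, not a figure the report states directly.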
Dell/Force10 Networks S4810 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 6890
East-West Server_to_Database 1010
East-West HTTP 4027
East-West iSCSI-Server_to_Storage 771
East-West iSCSI-Storage_to_Server 6435
North-South Client_to_Server 1801
North-South Server_to_Client 334
Dell/Force10 Networks S4810 Power Consumption Test
WattsATIS/10GbE port 4.03
3-Year Cost/WattsATIS/10GbE $14.73
Total power cost/3-Year $707.22
3 yr energy cost as a % of list price 2.83%
TEER Value 235
Cooling Front to Back
Discussion:
Leveraging a non-blocking, cut-through switching architecture, the Dell/Force10 Networks S4810 delivers line-rate L2 and L3 forwarding with nanosecond latency to maximize network performance. These claims were substantiated during the Lippis/Ixia RFC 2544 latency test, where the S4810 demonstrated average latency of 799 to 874 ns across all packet sizes for layer 2 forwarding and 796 to 862 ns across all packet sizes for layer 3 forwarding. The S4810's architecture demonstrated its ability to forward packets in the nanosecond range consistently, with little delay variation; in fact, the S4810 showed between 5 and 9 ns of delay variation across all packet sizes for both layer 2 and layer 3 forwarding. The S4810 demonstrated 100% throughput at line rate for all packet sizes ranging from 64 Bytes to 9216 Bytes.
In addition to the above verified and tested attributes, Dell/Force10 Networks claims that the S4810 is designed with powerful QoS features, coupled with Data Center Bridging (DCB) support via a future software enhancement, making the S4810 ideally suited for iSCSI storage environments. The S4810 also incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including Dell/Force10's VirtualScale stacking technology, reversible front-to-back or back-to-front airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans.
The S4810 also supports Dell/Force10's Open Automation Framework, which provides advanced network automation and virtualization capabilities for virtual data center environments. The framework comprises a suite of inter-related network management tools that can be used together or independently to provide a network that is flexible, available and manageable while reducing operational expenses.
The Apresia 15000 series was developed under the new BoxCore concept of improving network efficiency by using box switches as Core switches. The Apresia 15000-64XL-PSR, referred to here as the 15K, is a 64-port wire speed 10GbE Core switch. As a 64-port 10GbE switch, it could be deployed in ToR, End-of-Row or Core/Spine roles. For the Lippis/Ixia evaluation, Hitachi Cable requested that the Apresia 15K be grouped with Core switches, though it is more appropriate for the ToR category.
The Apresia 15K switch was tested across all 64 ports of 10GbE. Note that during testing, the Apresia 15K did not maintain the VLAN tag through the switch at 64 Byte packets, which is used to generate a packet signature to measure latency; therefore, no 64 Byte latency data is available. Hitachi claims to have fixed the issue after the test. Further, the Apresia 15K's largest supported frame size is 9044 Bytes, precluding 9216 Byte testing.
Hitachi Cable Apresia 15000-64XL-PSR
Apresia 15000-64XL-PSR Test Configuration (Hardware / Software Version / Port Density)
Device under test: 15000-64XL-PSR (http://www.apresia.jp/en/), AEOS 8.09.01, 64 ports
Test Equipment Ixia XM12 High Performance Chassis IxOS 6.00 EA
Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
IxNetwork 5.70 EA
LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules
1U Application Server IxAutomate 6.90 GA SP1
http://www.ixiacom.com/
Cabling Optical SFP+ connectors. Laser optimized duplex lc-lc 50 micron mm fiber, 850nm SFP+ transceivers
www.leviton.com
Apresia 15000-64XL-PSR RFC 2544 Layer 2 Latency Test (ns)
Frame (Bytes) 128 256 512 1024 1280 1518 2176
Max Latency 1020 1080 1040 1060 1040 1020 1020
Avg Latency 940.672 979.953 950.063 973.547 944.641 932.516 940.344
Min Latency 860 840 840 800 800 800 820
Avg. delay Variation 7.984 7.063 8 9.453 7 10.031 8.031

Apresia 15000-64XL-PSR RFC 2544 Layer 3 Latency Test (ns)
Frame (Bytes) 128 256 512 1024 1280 1518 2176
Max Latency 1020 1020 980 1000 980 960 980
Avg Latency 967.172 941.078 910.844 933.047 904.844 889.547 897.328
Min Latency 860 780 720 740 740 740 760
Avg. delay Variation 5.234 5.609 7.719 6.641 6.047 9.234 5.016
Latency data was measured for packet sizes between 128 and 9044 Bytes. Within this packet range, the Apresia 15K's average latency ranged from a low of 900 ns to a high of 979 ns for layer 2 traffic. Its average delay variation ranged between 7 and 10.5 ns, providing consistent latency across these packet sizes at full line rate.
For layer 3 traffic, the Apresia 15K's average latency ranged from a low of 889 ns to a high of 967 ns across 128 to 9044 Byte frames. Its average delay variation for layer 3 traffic ranged between 5 and 9.6 ns, providing consistent latency across these packet sizes at full line rate.
The Apresia 15K demonstrated nearly 100% throughput as a percentage of line rate across all 64 10GbE ports for layer 2 traffic and 100% throughput for layer 3 flows. In other words, there was some packet loss during the RFC 2544 Layer 2 test, but for Layer 3 not a single packet was dropped while the Apresia 15K was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously.
Apresia 15000-64XL-PSR RFC 2544 L2 & L3 Throughput Test (Throughput % Line Rate)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176
Layer 2 100 97.288 98.541 99.238 99.607 99.682 99.73 99.808
Layer 3 100 100 100 100 100 100 100 100

Apresia 15000-64XL-PSR RFC 2889 Congestion Test (% Line Rate)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176
Layer 2 Agg. Forwarding Rate 77.854 79.095 78.624 78.581 79.029 78.974 78.609 49.946
Layer 3 Agg. Forwarding Rate 77.913 77.911 77.785 78.229 67.881 50.071 50.029 50.03
L2 Head of Line Blocking no no no no no no no yes
L2 Back Pressure yes yes yes yes yes yes yes no
Agg Flow Control Frames 0 0 0 0 0 0 0 0
L3 Head of Line Blocking no no no no yes yes yes yes
L3 Back Pressure yes yes yes yes no no no no
The Apresia 15K demonstrated mixed results in the congestion test. Between 64 and 1518 Byte packets, the 15K demonstrated nearly 80% of aggregated forwarding rate as a percentage of line rate during congestion conditions for L2 traffic flows. At 2176 Byte packets, the aggregated forwarding rate dropped to nearly 50%, with HOL blocking present.
For L3 traffic flows at 64 to 512 Byte packets, the 15K demonstrated nearly 80% of aggregated forwarding rate as a percentage of line rate during congestion conditions. At packet sizes 1024 to 2176, HOL blocking was present as the aggregated forwarding rate dropped to 67% and then 50%.
Apresia 15000-64XL-PSR RFC 3918 IP Multicast Test
The Apresia 15K does not support IP Multicast.
The Apresia 15K delivered 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows during the cloud computing simulation test. Zero packet loss was observed and its latency stayed under 2183 ns.
The Apresia 15K is a new breed of cloud network ToR or leaf switch with power efficiency as a core value. Its WattsATIS/port is 3.58 and its TEER value is 264. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. Hitachi Cable was unable to provide list pricing, so there is no estimate of the Apresia 15K's three-year energy cost as a percentage of list price. In keeping with data center best practices, its cooling fans flow air front to back; Hitachi Cable states that back-to-front cooling will be available in the future.
Apresia 15K Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 1698
East-West Server_to_Database 1200
East-West HTTP 1573
East-West iSCSI-Server_to_Storage 1148
East-West iSCSI-Storage_to_Server 1634
North-South Client_to_Server 2183
North-South Server_to_Client 1021
Apresia 15K Power Consumption Test
WattsATIS/10GbE port 3.58
3-Year Cost/WattsATIS/10GbE $13.09
Total power cost/3-Year $837.67
TEER Value 264
Cooling Front to Back
Discussion:
The Apresia 15000-64XL-PSR (hereinafter called 64XL) is a terabit box switch that implements 2 40G uplink ports and 64 1G/10G SFP/SFP+ ports in a 2U unit. The Apresia 15000-32XL-PSR (hereinafter called 32XL) is half the size of the 64XL and implements 2 40G uplink ports and 32 1G/10G SFP/SFP+ ports in a 1U unit. Switching capacities are 1.28 terabits/second for the 64XL and 640 gigabits/second for the 32XL.
When set to equal capacities, the 64XL and 32XL are not only space-saving but also achieve significant cost savings compared to conventional chassis switches. Both are designed to be used as data center switches, or as network Core switches and broadband L2 switches for enterprises and academic institutions. Planned functions specific to data center switches that play an important role in cloud computing include FCoE, storage I/O consolidation technology, DCB (a new Ethernet technical standard to realize FCoE) and other functions that support virtual server environments. A data center license and an L3 license are optional and can be purchased in accordance with the intended use. Without such licenses, both the 64XL and 32XL perform as high-end L2 switches.
Juniper Networks EX Series EX8200 Ethernet Switch: EX8216 Ethernet Switch
Juniper Networks EX8216 Test Configuration (Hardware / Software Version / Port Density)
Device under test: EX8216 with EX8200-8XS and EX8200-40XS line cards (http://www.juniper.net/us/en/), JUNOS 10.3R2.11, 128 ports
Test Equipment Ixia XM12 High Performance Chassis IxOS 6.00 EA
Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module
IxNetwork 5.70 EA
LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules
1U Application Server IxAutomate 6.90 GA SP1
http://www.ixiacom.com/
Cabling Optical SFP+ connectors. Laser optimized duplex lc-lc 50 micron mm fiber, 850nm SFP+ transceivers
www.leviton.com
Juniper Networks EX8216 RFC 2544 Layer 2 Latency Test (ns)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Max Latency 15520 14560 14540 15780 17760 19180 19820 22380 50340
Avg Latency 11891.617 11366.211 11439.586 11966.75 13322.914 14131.773 14789.578 16302.711 34722.688
Min Latency 8860 9100 9500 10380 11800 12480 12980 14720 32100
Avg. delay Variation 9 5 5.375 7.945 7 5.141 9.516 5 9.938

Juniper Networks EX8216 RFC 2544 Layer 3 Latency Test (ns)
Frame (Bytes) 64 128 256 512 1024 1280 1518 2176 9216
Max Latency 14120 13820 13500 14540 15840 16540 17020 18660 35500
Avg Latency 12390.375 12112.5 11814.047 12685.406 13713.25 14318.484 14664.844 15767.438 28674.078
Min Latency 10320 10460 10780 11420 12420 12720 13160 14280 26220
Avg. delay Variation 9 5 5.109 8 7 5 9.563 5 10
The 16-slot EX8216 Ethernet Switch, part of the EX8200 line of Ethernet switches from Juniper Networks®, offers a high-density, high-performance platform for aggregating access switches deployed in data center ToR or end-of-row applications, as well as for supporting Gigabit Ethernet and 10 Gigabit Ethernet server access in data center end-of-row deployments. During the Lippis/Ixia test, the EX8216 was populated with 128 10GbE ports, classifying it as a Core/spine switch for private or public data center cloud implementations.
The Juniper EX8216 was tested across all 128 ports of 10GbE. Its average latency ranged from a low of 11,366 ns (11.4 µs) to a high of 34,723 ns (34.7 µs) at the jumbo 9,216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9.9 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Juniper EX8216's measured average latency ranged from a low of 11,814 ns to a high of 28,674 ns (28.7 µs) at the jumbo 9,216-byte frame size, with a maximum observed latency of 35,500 ns (35.5 µs). Its average delay variation for layer 3 traffic ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.
The Juniper EX8216 demonstrated 100% throughput as a percentage of line rate across all 128 10GbE ports. In other words, not a single packet was dropped while the Juniper EX8216 was presented with enough traffic to populate its 128 10GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.
Juniper Networks EX8216 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame size (bytes)  64   128  256  512  1024  1280  1518  2176  9216
Layer 2             100  100  100  100  100   100   100   100   100
Layer 3             100  100  100  100  100   100   100   100   100

Juniper Networks EX8216 RFC 2889 Congestion Test (aggregated forwarding rate, % line rate)

Frame size (bytes)            64      128     256     512     1024    1280    1518    2176    9216
Layer 2 Agg. Forwarding Rate  77.005  75.009  75.016  75.031  75.06   75.06   75.124  75.06   75.06
Layer 3 Agg. Forwarding Rate  77.814  77.835  77.825  74.196  72.784  77.866  77.691  77.91   78.157
Head-of-Line Blocking         no (all frame sizes)
Back Pressure                 no (all frame sizes)
Agg. Flow Control Frames      0 (all frame sizes)
The Juniper EX8216 demonstrated nearly 80% aggregated forwarding rate as a percentage of line rate during congestion conditions for L2 and L3 forwarding. A single 10GbE port was flooded at 150% of line rate. The EX8216 did not exhibit head-of-line (HOL) blocking, which means that as the 10GbE port on the EX8216 became congested, it did not impact the performance of other ports. No back pressure was detected, and the Ixia test gear did not receive flow control frames.
The Juniper EX8216 demonstrated 100%, and at times 99.99%, aggregated throughput for IP multicast traffic, with latencies ranging from 25,135 ns (25.1 µs) to 60,055 ns (60.1 µs) at the 9,216-byte packet size.
Juniper Networks EX8216 RFC 3918 IP Multicast Test

Frame size (bytes)         64    128   256   512   1024  1280  1518  2176  9216
Agg. Tput Rate (%)         100   100   100   100   100   100   100   100   100
Agg. Average Latency (µs)  27.2  26.2  25.1  26.6  28.8  30.0  30.7  33.0  60.1
The Juniper EX8216 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 30 µs. Of special note is the consistency with which the Juniper EX8216 performed while processing a heavy combination of east-west and north-south flows.
The Juniper EX8216 represents a new breed of cloud network spine switches with power efficiency as a core value. Its WattsATIS per port is 21.68 and its TEER value is 44. Its power cost per 10GbE port is estimated at $26.42 per year. The three-year cost to power the EX8216 is estimated at $10,145.60 and represents less than 2% of its list price. The Juniper EX8216's cooling fans flow side to side.
Juniper Networks EX8216 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 30263
East-West Server_to_Database 11614
East-West HTTP 18833
East-West iSCSI-Server_to_Storage 19177
East-West iSCSI-Storage_to_Server 26927
North-South Client_to_Server 12145
North-South Server_to_Client 11322
Juniper Networks EX8216 Power Consumption Test
WattsATIS/10GbE port 21.68
3-Year Cost/WattsATIS/10GbE $79.26
Total power cost/3-Year $10,145.60
3 yr energy cost as a % of list price 1.30%
TEER Value 44
Cooling Side-to-Side
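As a sanity check, the per-port and total figures in the table can be approximated with a short sketch. The electricity rate below is an assumption reverse-engineered to roughly match the published numbers; the report does not state the rate it used.

```python
# Sketch of the 3-year power-cost arithmetic behind the table above.
# RATE_USD_PER_KWH is an assumed electricity rate, not a figure from the report.
HOURS_3YR = 3 * 365 * 24          # 26,280 hours in three years
RATE_USD_PER_KWH = 0.139          # assumed electricity rate, $/kWh

def three_year_cost_per_port(watts_per_port: float) -> float:
    """Energy cost of one 10GbE port over three years of continuous draw."""
    kwh = watts_per_port / 1000 * HOURS_3YR
    return kwh * RATE_USD_PER_KWH

per_port = three_year_cost_per_port(21.68)   # EX8216: 21.68 WattsATIS per port
total = per_port * 128                       # 128 tested 10GbE ports
# per_port lands near the reported $79.26; total near the reported $10,145.60
```

The annual figure quoted in the text ($26.42 per port per year) is simply the three-year per-port cost divided by three.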
Discussion:
The EX8216 delivers approximately 1.9 billion packets per second (Bpps) of high-density, wire-speed 10GbE performance for the largest data center networks and includes an advanced set of hardware features enabled by the Juniper-designed EX-PFE2 ASICs. Working with the carrier-class Junos operating system, which runs on all EX Series Ethernet switches, the EX-PFE2 ASICs on each EX8200 line card deliver the scalability to support high-performance data center networks. Two line cards were used in the Lippis/Ixia test, the EX8200-8XS and EX8200-40XS, described below.
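The 1.9 Bpps figure follows directly from line-rate arithmetic at the smallest RFC 2544 frame size. A minimal sketch, assuming the standard 20 bytes of per-frame overhead (preamble/SFD plus interframe gap) on Ethernet:

```python
# Line-rate frame throughput at 10GbE for 64-byte frames, scaled to 128 ports.
# Each frame occupies its own bytes plus 20 bytes of preamble/SFD + interframe gap.
LINK_BPS = 10e9
FRAME_OVERHEAD_BYTES = 20  # 8 preamble/SFD + 12 interframe gap

def line_rate_fps(frame_bytes: int, link_bps: float = LINK_BPS) -> float:
    """Maximum frames per second a link can carry at a given frame size."""
    return link_bps / ((frame_bytes + FRAME_OVERHEAD_BYTES) * 8)

per_port_fps = line_rate_fps(64)   # ~14.88 million frames/s per 10GbE port
chassis_pps = per_port_fps * 128   # ~1.9 billion packets per second
```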
EX8200-8XS Ethernet Line Card
The EX8200-8XS is an 8-port 10GBASE-X line card with compact, modular SFP+ fiber optic interfaces, enabling up to 128 line-rate 10 Gigabit Ethernet ports in an EX8216 modular switch chassis. The EX8200-8XS is ideal for enterprise applications such as campus or data center uplink aggregation, core and backbone interconnects, and for service provider deployments requiring high-density, wire-speed 10GbE interconnects in metro area networks, Internet exchange points and points of presence (POPs).
The 10GbE port densities afforded by the EX8200-8XS line cards also enable EX8200 switches to consolidate aggregation and core layers in the data center, simplifying the network architecture and reducing power, space and cooling requirements while lowering total cost of ownership (TCO).
EX8200-40XS Ethernet Line Card
The EX8200-40XS is a 40-port oversubscribed 10GbE solution for data center end-of-row and middle-of-row server access, as well as for data center blade switch and top-of-rack or campus uplink aggregation deployments. The EX8200-40XS supports a wide range of both SFP (GbE) and SFP+ (10 GbE) modular optical interfaces for connecting over multimode fiber, single-mode fiber, and copper cabling.

The 40 SFP/SFP+ ports on the EX8200-40XS are divided into 8 independent groups of 5 ports each. Because port groups are independent of one another, each group can have its own oversubscription ratio, providing customers with deployment flexibility. Each group dedicates 10 Gbps of switching bandwidth to be dynamically shared among the ports; queues are allocated within a 1MB oversubscription buffer based on the number of active ports in a group and the types of interfaces installed. Users simply connect the cables and the EX8200-40XS automatically provisions each port group accordingly. No manual configuration is required.
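As an illustration of the port-group arithmetic above, a group's effective oversubscription ratio is just the installed front-panel bandwidth divided by the group's 10 Gbps of shared fabric bandwidth. The function below is a hypothetical sketch of that arithmetic, not Junos behavior:

```python
# Illustrative oversubscription arithmetic for an EX8200-40XS port group:
# each 5-port group shares 10 Gbps of switching bandwidth.
GROUP_FABRIC_GBPS = 10

def group_oversubscription(installed_port_gbps: list) -> float:
    """Ratio of installed front-panel bandwidth to the group's fabric bandwidth."""
    return sum(installed_port_gbps) / GROUP_FABRIC_GBPS

print(group_oversubscription([10, 10, 10, 10, 10]))  # five SFP+ ports -> 5.0 (5:1)
print(group_oversubscription([10, 1, 1, 1, 1]))      # mixed SFP/SFP+  -> 1.4
```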
Virtual Chassis Technology
The EX8200 supports Juniper Networks' unique Virtual Chassis technology, which enables two interconnected chassis (any combination of EX8208s or EX8216s) to operate as a single, logical device with a single IP address. Deployed as a collapsed aggregation or core layer solution, an EX8200 Virtual Chassis configuration creates a network fabric for interconnecting access switches, routers and service-layer devices, such as firewalls and load balancers, using standards-based Ethernet LAGs. The network fabric created by an EX8200 Virtual Chassis configuration prevents loops, eliminating the need for protocols such as Spanning Tree. The fabric also simplifies the network by eliminating the need for the Virtual Router Redundancy Protocol (VRRP), increasing the scalability of the network design. EX8200 Virtual Chassis configurations are highly resilient, with no single point of failure, ensuring that no single element can render the entire fabric inoperable following a failure. In an EX8200 Virtual Chassis configuration, the Routing Engine functionality is externalized to a purpose-built, server-class appliance, the XRE200, which supports the control plane processing requirements of large-scale systems and provides an extra layer of availability and redundancy. All control protocols, such as OSPF, IGMP, Link Aggregation Control Protocol (LACP), 802.3ah and VCCP, as well as all management plane functions, run or reside on the XRE200.
Mellanox/Voltaire® Vantage™ 6048
The Mellanox/Voltaire Vantage™ 6048 switch is a high-performance layer 2 10GbE ToR/leaf switch optimized for enterprise data center and cloud computing environments, with 48 ports of 10GbE line-rate connectivity and 960 Gb/s of non-blocking switching throughput.
The Mellanox/Voltaire Vantage™ 6048 ToR switch was tested across all 48 ports of 10GbE. Its average latency ranged from a low of 2,484 ns to a high of 2,784 ns for layer 2 traffic. Its average delay variation ranged between 4.8 and 9.6 ns, providing consistent latency across all packet sizes at full line rate.
For layer 3 traffic, the Mellanox/Voltaire Vantage™ 6048's average latency ranged from a low of 2,504 ns to a high of 2,791 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate.
Mellanox/Voltaire Vantage™ 6048 Test Configuration

Device under test: Vantage 6048 (http://www.voltaire.com); software 1.0.0.50; port density 48.
Test equipment: Ixia XM12 High Performance Chassis (IxOS 6.00 EA); Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module (IxNetwork 5.70 EA); LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U application server (IxAutomate 6.90 GA SP1); http://www.ixiacom.com/
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron multimode fiber with 850 nm SFP+ transceivers (www.leviton.com)
Mellanox/Voltaire® Vantage™ 6048 RFC 2544 Layer 2 Latency Test (ns)

Frame size (bytes)    64        128       256       512       1024      1280      1518      2176      9216
Max Latency           2620      2720      2820      2960      3120      3040      3120      3060      3040
Avg Latency           2484.354  2495.521  2513.521  2617.563  2783.833  2776.146  2757.021  2759.813  2737.354
Min Latency           2400      2420      2380      2440      2420      2440      2420      2420      2420
Avg. Delay Variation  8.625     5.208     5.146     7.479     6.979     4.875     9.688     4.958     9.417

Mellanox/Voltaire® Vantage™ 6048 RFC 2544 Layer 3 Latency Test (ns)

Frame size (bytes)    64        128       256       512       1024      1280      1518      2176      9216
Max Latency           2940      2940      2800      3240      3240      3140      3080      3100      3260
Avg Latency           2504.188  2513.625  2517.979  2650.021  2773.729  2791.313  2762.813  2766.958  2753.958
Min Latency           2420      2420      2380      2440      2400      2420      2420      2420      2420
Avg. Delay Variation  8.583     5.313     5.188     7.667     6.958     5.479     9.583     5         9.313
The Mellanox/Voltaire Vantage™ 6048 demonstrated 100% throughput as a percentage of line rate across all 48 10GbE ports. In other words, not a single packet was dropped while the Vantage™ 6048 was presented with enough traffic to populate its 48 10GbE ports at line rate simultaneously, for both L2 and L3 traffic flows.
Mellanox/Voltaire® Vantage™ 6048 RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame size (bytes)  64   128  256  512  1024  1280  1518  2176  9216
Layer 2             100  100  100  100  100   100   100   100   100
Layer 3             100  100  100  100  100   100   100   100   100

Mellanox/Voltaire® Vantage™ 6048 RFC 2889 Congestion Test (aggregated forwarding rate, % line rate)

Frame size (bytes)           64       128      256      512      1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate 100      100      100      100      100      100      100      100      100
Layer 3 Agg. Forwarding Rate 100      100      100      100      100      100      100      100      100
Head-of-Line Blocking        no (all frame sizes)
Back Pressure                yes (all frame sizes)
L2 Agg Flow Control Frames   7152760  5201318  2711124  1721416  1756370  2110202  1807994  2168202  2079506
L3 Agg Flow Control Frames   7152846  5215032  2710746  1721416  1756238  2110308  1808060  2172256  2079658
The Mellanox/Voltaire Vantage™ 6048 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for all packet sizes in the L2 forwarding test. A single 10GbE port was flooded at 150% of line rate. The Vantage™ 6048 did not exhibit HOL blocking, which means that as the 10GbE port on the Vantage™ 6048 became congested, it did not impact the performance of other ports. Back pressure was present across all packet sizes, as the Vantage™ 6048 sent flow control frames to the Ixia test gear, which is normal operation.

For layer 3 traffic, the Vantage™ 6048 likewise demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for all packet sizes. No HOL blocking was present. Back pressure was again present across all packet sizes as the Vantage™ 6048 sent flow control frames to the Ixia test gear, which is normal operation.
Mellanox/Voltaire® Vantage™ 6048 RFC 3918 IP Multicast Test
The Mellanox/Voltaire Vantage™ 6048 does not support IP multicast.
The Mellanox/Voltaire Vantage™ 6048 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed at or below 11,401 ns (11.4 µs).
Mellanox/Voltaire Vantage™ 6048 Cloud Simulation Test
Traffic Direction Traffic Type Avg Latency (ns)
East-West Database_to_Server 11401
East-West Server_to_Database 2348
East-West HTTP 10134
East-West iSCSI-Server_to_Storage 2884
East-West iSCSI-Storage_to_Server 10454
North-South Client_to_Server 3366
North-South Server_to_Client 2197
Mellanox/Voltaire Vantage™ 6048 Power Consumption Test
WattsATIS/10GbE port 5.5
3-Year Cost/WattsATIS/10GbE $20.11
Total power cost/3-Year $965.19
3 yr energy cost as a % of list price 4.17%
TEER Value 172
Cooling Front to Back
The Mellanox/Voltaire Vantage™ 6048 represents a new breed of cloud network leaf or ToR switches with power efficiency as a core value. Its WattsATIS per port is 5.5 and its TEER value is 172. Note that higher TEER values are more desirable, as they represent the ratio of work performed to energy consumption. Its power cost per 10GbE port is estimated at $6.70 per year. The three-year cost to power the Vantage™ 6048 is estimated at $965.19 and represents approximately 4% of its list price. In keeping with data center best practices, its cooling fans flow air front to back.
Discussion
The Mellanox/Voltaire Vantage™ 6048 is a 48-port 10GbE ToR switch. Not tested in the Lippis/Ixia test are its claimed converged Ethernet capabilities to enable new levels of efficiency, scalability and real-time application performance, while at the same time consolidating multiple/redundant network tiers and significantly reducing infrastructure expenses.
Top-of-Rack Switches Cross-Vendor Analysis
Across the three industry Lippis/Ixia tests of December 2010, April 2011 and October 2011, eleven ToR switches were evaluated for performance and power consumption. The participating switches are:
Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
Alcatel-Lucent OmniSwitch 6900-X40
Brocade VDX™ 6720-24 Data Center Switch
Brocade VDX™ 6730-32 Data Center Switch
Extreme Networks Summit® X670V
Dell/Force10 S-Series S4810
Hitachi Cable Apresia 15000-64XL-PSR
IBM BNT RackSwitch G8124
IBM BNT RackSwitch G8264
Mellanox/Voltaire® Vantage™ 6048
All but Brocade utilize merchant silicon in their ToR switches, each based on a new single-chip design from Broadcom, Marvell or Fulcrum Microsystems; Brocade developed its own single-chip silicon. With a single chip provided by a chip manufacturer, vendors are free to invest resources in areas other than ASIC development, which can consume much of a company's engineering and financial resources. With merchant silicon providing the forwarding engine for their switches, these vendors are free to choose where to innovate, be it buffer architecture, network services such as virtualization support, OpenFlow, 40GbE uplink or fan-in support, etc. The Lippis/Ixia test results demonstrate that these new chip designs provide state-of-the-art performance at power-efficiency levels not seen before. Further, price points on a per-10GbE-port basis range from a low of $351 to a high of $670.
IT business leaders are responding favorably to ToR switches equipped with a value proposition of high performance, low acquisition price and low power consumption. These ToR switches are currently the hot boxes in the industry, with quarterly revenues for mid-size firms in the $10M to $15M-plus range. We compared each of the above switches in terms of its ability to forward packets: quickly (i.e., latency), without loss at full line rate, when ports are oversubscribed with network traffic by 150 percent, in IP multicast mode, and in cloud simulation. We also measured power consumption as described in the Lippis Test Methodology section above.
In the following ToR analysis, we separate 24-port ToR switches from all others. There are four 24-port ToR switches:
Arista 7124SX 10G SFP Data Center Switch
Brocade VDX™ 6720-24 Data Center Switch
Brocade VDX™ 6730-32 Data Center Switch
IBM BNT RackSwitch G8124
The 48 port and above ToR switches evaluated are:
Arista 7050S-64 10/40G Data Center Switch
Alcatel-Lucent OmniSwitch 6900-X40
Extreme Networks Summit® X670V
Dell/Force10 S-Series S4810
Hitachi Cable Apresia 15000-64XL-PSR
IBM BNT RackSwitch G8264
Mellanox/Voltaire® Vantage™ 6048
This approach provides a consistent cross-vendor analysis. We start with a cross-vendor analysis of the 24-port ToR switches, followed by the 48-port and above devices. We end with a set of industry recommendations.
24 Port ToR Switches RFC 2544 Latency Test - Average Latency (ns)
All switches performed at zero frame loss; all tested in cut-through mode. Smaller values are better.

Layer 2:
Frame size (bytes)  IBM BNT G8124*  Brocade VDX 6720-24  Arista 7124SX  Brocade VDX 6730-32
64                  651.75          527.04               541.08         521.17
128                 671.50          510.71               548.21         530.04
256                 709.46          570.08               527.92         577.92
512                 709.04          619.46               528.96         606.50
1,024               708.25          615.54               531.58         605.17
1,280               709.71          615.58               531.42         604.92
1,518               707.93          614.21               531.04         607.75
2,176               708.04          612.17               531.42         606.42
9,216               706.63          610.96               532.79         629.00

Layer 3:
Frame size (bytes)  IBM BNT G8124*  Brocade VDX 6720-24**  Arista 7124SX  Brocade VDX 6730-32**
64                  642.71          **                     561.58         **
128                 659.67          **                     582.33         **
256                 701.29          **                     567.75         **
512                 699.79          **                     570.29         **
1,024               699.00          **                     568.79         **
1,280               699.92          **                     568.54         **
1,518               698.50          **                     564.33         **
2,176               698.33          **                     563.83         **
9,216               696.92          **                     562.21         **

* The staggered start option was configured at the 64-byte frame size for the IBM BNT RackSwitch G8124. ** The Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.
We show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance across the single-chip design. All switches were tested in cut-through mode; the Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches. The standout here is that these switch latencies run from as low as 510 ns to a high of 709 ns. The Arista 7124SX layer 2 latency is the lowest measured in the Lippis/Ixia set of cloud networking tests for packet sizes between 256 and 9,216 bytes. Also of note is the consistency in low latency across all packet sizes.
ToR Switches RFC 2544 Latency Test - Average Latency (ns)
All switches performed at zero frame loss; all tested in cut-through mode. Smaller values are better.

Layer 2:
Frame size (bytes)  Arista 7050S-64  IBM BNT G8264  Extreme X670V
64                  914.33           1081.34        900.19
128                 926.77           1116.83        915.52
256                 1110.64          1214.92        1067.12
512                 1264.91          1374.25        1208.73
1,024               1308.84          1422.00        1412.81
1,280               1309.11          1421.28        1415.96
1,518               1310.77          1421.75        1415.33
2,176               1309.77          1419.89        1411.92
9,216               1305.50          1415.36        1410.02

Layer 3:
Frame size (bytes)  Arista 7050S-64  IBM BNT G8264  Extreme X670V
64                  905.42           1075.39        907.92
128                 951.20           1121.02        943.14
256                 1039.14          1213.55        1029.73
512                 1191.64          1370.94        1176.92
1,024               1233.00          1421.94        1375.02
1,280               1232.45          1422.63        1373.87
1,518               1232.48          1421.83        1374.08
2,176               1229.92          1419.73        1372.71
9,216               1225.58          1415.78        1370.33
The Arista 7124SX and 7050S-64, Brocade VDX™ 6720-24 and 6730-32, Extreme Summit X670V, and IBM BNT G8264 and G8124 switches were configured and tested via the cut-through test method, while the Dell/Force10, Mellanox/Voltaire and Apresia switches were configured and tested via the store-and-forward method. This is because some switches are store-and-forward devices while others are cut-through devices. During store-and-forward testing, the test equipment removes the packet serialization delay (which depends on packet size) from the reported latency number. Therefore, to compare cut-through to store-and-forward latency measurements, the packet serialization delay needs to be added back to the store-and-forward latency number. For example, to a store-and-forward latency of 800 ns for a 1,518-byte packet, an additional 1,240 ns (the serialization delay of a 1,518-byte packet at 10 Gbps) must be added. This difference can be significant. Note that other device-specific factors can impact latency too, which makes comparisons between the two testing methodologies difficult.
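The normalization described above can be sketched as follows. This sketch counts frame bits only, which gives roughly 1,214 ns for a 1,518-byte frame at 10 Gbps; the report's 1,240 ns figure evidently also includes some framing overhead. The 800 ns store-and-forward value is the example from the text.

```python
# Normalizing a store-and-forward latency measurement so it can be compared
# against a cut-through measurement: add back the serialization delay.
def serialization_delay_ns(frame_bytes: int, link_gbps: float = 10.0) -> float:
    """Time to clock the frame's bits onto the wire (frame bits only)."""
    return frame_bytes * 8 / link_gbps   # bits / (bits per ns) -> ns

def comparable_latency_ns(sf_latency_ns: float, frame_bytes: int,
                          link_gbps: float = 10.0) -> float:
    """Store-and-forward latency adjusted for comparison with cut-through."""
    return sf_latency_ns + serialization_delay_ns(frame_bytes, link_gbps)

serialization_delay_ns(1518)       # ~1214 ns on 10GbE
comparable_latency_ns(800, 1518)   # ~2014 ns for the text's 800 ns example
```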
As in the 24-port ToR switches, we show average latency and average delay variation across all packet sizes for layer 2 and 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance across
the single-chip design. We separate cut-through and store-and-forward results. The Arista 7050S-64 offers approximately 100 ns less latency than the IBM BNT RackSwitch G8264. As in the 24-port ToR switches, the 48-port-and-above devices show only slight differences in average delay variation between the Arista 7050S-64, Extreme X670V and IBM BNT RackSwitch G8264, demonstrating consistent latency under heavy load at zero packet loss for layer 2 and layer 3 forwarding. Differences do exist in average latency between suppliers. Just as in the 24-port ToR switches, the range of average delay variation is 5 ns to 10 ns.
ToR Switches RFC 2544 Latency Test - Average Latency (ns)
All switches performed at zero frame loss; all tested in store-and-forward mode. Smaller values are better.

Layer 2:
Frame size (bytes)  Dell/Force10 S4810  Mellanox/Voltaire 6048  Apresia 15K  Alcatel-Lucent 6900-X40
64                  874.48              2484.35                 **           869.37
128                 885.00              2495.52                 941          880.46
256                 867.60              2513.52                 980          860.74
512                 825.29              2617.56                 950          822.48
1,024               846.52              2783.83                 974          833.37
1,280               819.90              2776.15                 945          808.98
1,518               805.06              2757.02                 933          796.07
2,176               812.06              2759.81                 940          802.65
9,216               799.79              2737.35                 900.64*      792.39

Layer 3:
Frame size (bytes)  Dell/Force10 S4810  Mellanox/Voltaire 6048  Apresia 15K  Alcatel-Lucent 6900-X40
64                  862.42              2504.19                 **           891.41
128                 868.54              2513.63                 967.17       885.37
256                 863.77              2517.98                 941.08       880.76
512                 822.63              2650.02                 910.84       838.76
1,024               843.02              2773.73                 933.05       850.94
1,280               817.83              2791.31                 904.84       828.33
1,518               802.38              2762.81                 889.55       812.83
2,176               809.71              2766.96                 897.33       819.72
9,216               796.06              2753.96                 900.64*      809.94
The Extreme X670V delivers the lowest latency at the smaller packet sizes. New to this category is the Alcatel-Lucent 6900-X40, which delivered the lowest layer 2 latency of all products in this category of store-and-forward ToR switches. The Mellanox/Voltaire 6048's measured latency is the highest among the store-and-forward devices, while the Dell/Force10 S4810 was the lowest for layer 3 across nearly all packet sizes.

* The largest frame size supported on the Apresia 15K is 9,044 bytes. ** There is no latency data for the Apresia 15K at 64 bytes due to configuration difficulties during testing: the 15K could not be configured to maintain a VLAN at 64 bytes, which eliminated the packet signature needed to measure latency.
24 Port ToR Switches RFC 2544 Latency Test - Average Delay Variation (ns)
All switches performed at zero frame loss; all tested in cut-through mode. Smaller values are better.

Layer 2:
Frame size (bytes)  IBM BNT G8124*  Brocade VDX 6720-24  Arista 7124SX  Brocade VDX 6730-32
64                  9.25            8.75                 8.42           8.71
128                 5.08            5.38                 5.96           5.46
256                 5.42            5.88                 5.54           5.63
512                 7.88            7.75                 7.67           7.71
1,024               7.17            6.38                 7.83           6.63
1,280               5.25            8.25                 5.92           4.75
1,518               10.13           8.75                 9.54           9.63
2,176               5.17            5.17                 6.67           5.17
9,216               10.13           6.54                 9.67           9.67

Layer 3:
Frame size (bytes)  IBM BNT G8124*  Brocade VDX 6720-24**  Arista 7124SX  Brocade VDX 6730-32**
64                  8.75            **                     9.96           **
128                 5.67            **                     5.21           **
256                 5.46            **                     5.42           **
512                 8.09            **                     7.92           **
1,024               7.30            **                     6.88           **
1,280               7.84            **                     5.50           **
1,518               10.38           **                     9.25           **
2,176               5.13            **                     5.25           **
9,216               11.08           **                     9.33           **

* The staggered start option was configured at the 64-byte frame size for the IBM BNT RackSwitch G8124. ** The Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.
Notice there is little difference in average delay variation across all vendors, demonstrating consistent latency under heavy load at zero packet loss for L2 and L3 forwarding. Differences do exist in average latency between suppliers. The range of average delay variation is 5 ns to 10 ns. All switches were tested in cut-through mode; the Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches.
ToR Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss
Fall 2010Lippis/Ixia Test
Spring 2011Lippis/Ixia Test
Fall 2011Lippis/Ixia Test
64-bytes 128-bytes 256-bytes 512-bytes 1,024-bytes 1,280-bytes 1,518-bytes 2,176-bytes 9,216-bytes
Arista7050S-64
IBM BNTRackSwitch G8264
ExtremeX670V
ExtremeX670V
Layer 2 Layer 3
Arista7050S-64
IBM BNTRackSwitch G8264
0
2
4
6
8
10
12
Frames/Packet           Layer 2                           Layer 3
Size (bytes)    7050S-64   G8264   X670V         7050S-64   G8264   X670V
64              8.67       8.69    9.35          9.67       8.61    8.14
128             5.27       5.22    7.96          5.39       5.17    5.42
256             5.41       5.61    6.94          5.64       5.64    5.56
512             7.80       7.61    7.87          7.78       7.67    7.02
1,024           6.92       6.92    9.71          6.75       6.73    6.89
1,280           5.52       7.56    7.83          5.58       6.25    4.56
1,518           9.73       9.64    10.19         9.58       9.42    9.35
2,176           5.23       5.05    7.96          5.33       5.08    5.06
9,216           9.75       9.56    10.73         9.86       9.58    9.71

ns (smaller values are better). 7050S-64 = Arista 7050S-64; G8264 = IBM BNT RackSwitch G8264; X670V = Extreme X670V.
Cut-Through Mode Testing
ToR Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss
Frames/Packet                Layer 2                                   Layer 3
Size (bytes)    S4810   6048    15K      6900-X40      S4810   6048    15K     6900-X40
64              9.10    8.63    **       7.70          8.46    8.58    **      7.20
128             5.23    5.21    8.00     5.48          5.25    5.31    5.23    5.89
256             5.46    5.15    7.00     5.70          5.52    5.19    5.61    5.44
512             7.94    7.48    8.00     7.96          7.69    7.67    7.72    7.65
1,024           6.73    6.98    9.00     7.41          6.69    6.96    6.64    7.00
1,280           6.00    4.88    7.00     3.33          4.65    5.48    6.05    3.50
1,518           8.98    9.69    10.00    10.33         9.06    9.58    9.23    9.13
2,176           5.00    4.96    8.00     5.11          5.08    5.00    5.02    4.94
9,216           9.27    9.42    *10.58   9.24          9.15    9.31    *9.67   9.37

ns (smaller values are better). S4810 = Dell/Force10 S4810; 6048 = Mellanox/Voltaire 6048; 15K = Apresia 15K; 6900-X40 = Alcatel-Lucent 6900-X40.
* The largest frame size supported on the Apresia 15K is 9,044 bytes. ** There is no latency data for the Apresia 15K at 64 bytes due to configuration difficulties during testing: the 15K could not be configured to maintain a VLAN at 64 bytes, which eliminated the packet signature used to measure latency.
Store & Forward Mode Testing
Latency for the 48-port-and-above ToR devices was measured in store-and-forward mode and shows slight differences in average delay variation among the Alcatel-Lucent 6900-X40, Dell/Force10 S4810, Mellanox/Voltaire Vantage 6048 and Apresia 15K. All switches demonstrate consistent latency under heavy load at zero packet loss for Layer 2 and Layer 3 forwarding. The Apresia 15K demonstrates the largest average delay variation across all packet sizes, while the Alcatel-Lucent 6900-X40, Dell/Force10 and Mellanox/Voltaire results are more varied. However, the range of average delay variation is 5ns to 10ns.
24 Port ToR Switches RFC 2544 Throughput Test
Frames/Packet           Layer 2                                       Layer 3
Size (bytes)    G8124   VDX 6720-24   7124SX   VDX 6730-32    G8124   VDX 6720-24   7124SX   VDX 6730-32
64-9,216        100%    100%          100%     100%           100%    *             100%     *

% Line Rate. Every switch forwarded at 100% of line rate at all nine frame sizes tested (64, 128, 256, 512, 1,024, 1,280, 1,518, 2,176 and 9,216 bytes). G8124 = IBM BNT RackSwitch G8124; VDX 6720-24 and VDX 6730-32 = Brocade; 7124SX = Arista.
As expected, all 24 port ToR switches are able to forward L2 and L3 packets of all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance wire-speed devices.
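The 100%-of-line-rate figures can be sanity-checked against theoretical Ethernet frame rates. The sketch below (a hypothetical helper, not part of the test suite) computes the maximum frames per second a 10GbE port can carry at a given frame size, accounting for the 8-byte preamble and 12-byte inter-frame gap that accompany every frame on the wire:

```python
# Per-frame overhead on the wire: 8-byte preamble + 12-byte inter-frame gap.
OVERHEAD_BYTES = 8 + 12

def line_rate_fps(link_bps, frame_bytes):
    """Theoretical maximum frames per second at 100% of line rate."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return link_bps / bits_per_frame

# 10GbE at the smallest RFC 2544 frame size tested here:
print(int(line_rate_fps(10e9, 64)))  # 14880952 frames per second
```

At 64 bytes a wire-speed switch must forward roughly 14.88 million frames per second per 10GbE port; at 9,216-byte jumbo frames the rate drops to about 135,000 frames per second.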
* Brocade VDX 6720-24 & VDX 6730-32 are layer 2 switches and thus did not participate in layer 3 test.
ToR Switches RFC 2544 Throughput Test
All seven switches - the Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Apresia 15K, Extreme X670V and Alcatel-Lucent 6900-X40 - forwarded at 100% of line rate at every frame size (64 through 9,216 bytes) for both Layer 2 and Layer 3, with one exception: the Apresia 15K at Layer 2.

Apresia 15K, Layer 2 (% Line Rate)
64        100%
128       97.29%
256       98.54%
512       99.24%
1,024     99.61%
1,280     99.68%
1,518     99.73%
2,176     99.81%
9,216     100%
With the exception of the Apresia 15K, as expected, all switches are able to forward L2 and L3 packets of all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance wire-speed devices.
24 Port ToR Switches RFC 2889 Congestion Test 150% of line rate into single 10GbE
At every frame size (64 through 9,216 bytes), all four switches delivered an aggregated forwarding rate (AFR) of 100% of line rate with no head-of-line (HOL) blocking and with back pressure (BP) present:

Switch                      Layer 2 AFR%/HOL/BP    Layer 3 AFR%/HOL/BP
IBM BNT RackSwitch G8124    100% / N / Y           100% / N / Y
Brocade VDX 6720-24         100% / N / Y           *
Arista 7124SX               100% / N / Y           100% / N / Y
Brocade VDX 6730-32         100% / N / Y           *
* Brocade VDX 6720-24 & Brocade VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure.
Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.
All 24 port ToR switches performed as expected: no HOL blocking was observed or measured during the L2 and L3 congestion tests, assuring that a congested port does not impact other ports, and thus the network, i.e., congestion is contained. In addition, the IBM BNT RackSwitch G8124, Brocade VDX 6720-24 and Brocade VDX 6730-32 delivered 100% aggregated forwarding rate by implementing back pressure, or signaling the Ixia test equipment with control frames to slow the rate of packets entering the congested port, a normal and best practice for ToR switches. While not evident in the above graphic, uniquely the Arista 7124SX did indicate to Ixia
test gear that it was using back pressure; however, no flow control frames were detected, and in fact there were none. Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congested port surpasses the number of transmitted frames at the MOL (Maximum Offered Load) rate, then back pressure is present.

Thanks to the 7124SX's generous and dynamic buffer allocation, it can deliver more packets on a congested port than the MOL predicts; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7124SX. The Arista 7124SX has a 2MB packet buffer pool and uses Dynamic Buffer Allocation (DBA) to manage congestion. Unlike architectures with fixed per-port packet memory, the 7124SX handles microbursts by allocating packet memory to the port that needs it. Such microbursts can be generated by RFC 2889 congestion tests, when multiple traffic sources send traffic to the same destination for a short period of time. Using DBA, a port on the 7124SX can buffer up to 1.7MB of data in its transmit queue.
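The RFC 2889 back-pressure rule described above reduces to a simple comparison. The sketch below paraphrases it (the function name and counters are illustrative, not Ixia's API) and shows why a deep shared buffer can trip the rule without any flow-control frames being sent:

```python
def back_pressure_present(frames_forwarded_on_congested_port, frames_at_mol):
    """Paraphrase of the RFC 2889 para 5.5.5.2 rule described above:
    if the congested port delivers more frames than the Maximum Offered
    Load (MOL) rate would allow, the tester concludes the device must be
    pushing back on the senders. (Illustrative helper, not Ixia's API.)
    """
    return frames_forwarded_on_congested_port > frames_at_mol

# A deep shared buffer, as on the 7124SX, can absorb a burst and later
# drain more frames than the MOL window predicts, so the rule reports
# back pressure even though no flow-control frames were ever sent:
print(back_pressure_present(1_050_000, 1_000_000))  # True
print(back_pressure_present(900_000, 1_000_000))    # False
```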
ToR Switches RFC 2889 Congestion Test 150% of line rate into single 10GbE
At every frame size (64 through 9,216 bytes), the Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Extreme X670V and Alcatel-Lucent 6900-X40 delivered an aggregated forwarding rate (AFR) of 100% of line rate with no head-of-line (HOL) blocking and with back pressure (BP) present, for both Layer 2 and Layer 3. The Apresia 15K was the exception:

Apresia 15K          Layer 2               Layer 3
Frame Size (bytes)   AFR%    HOL   BP      AFR%    HOL   BP
64                   77.9%   N     Y       77.9%   N     Y
128                  79.1%   N     Y       77.9%   N     Y
256                  78.6%   N     Y       77.8%   N     Y
512                  78.6%   N     Y       78.2%   N     Y
1,024                79.0%   N     Y       67.9%   Y     N
1,280                79.0%   N     Y       50.1%   Y     N
1,518                78.6%   N     Y       50.0%   Y     N
2,176                49.9%   Y     N       50.0%   Y     N
9,216                *                     *
* The largest frame size supported on the Apresia 15K is 9,044 bytes.
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure.
Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.
The Arista 7050S-64, Alcatel-Lucent 6900-X40, IBM BNT RackSwitch G8264, Extreme X670V, Dell/Force10 S4810 and Mellanox/Voltaire Vantage 6048 performed as expected: no HOL blocking was observed during the L2 and L3 congestion tests, assuring that a congested port does not impact other ports, and thus the network, i.e., congestion is contained. In addition, these switches delivered 100% aggregated forwarding rate by implementing back pressure, or signaling the Ixia test equipment with control frames to slow the rate of packets entering the congested port, a normal and best practice for ToR switches. While not evident in the above graphic,
uniquely the Arista 7050S-64 did indicate to Ixia test gear that it was using back pressure; however, no flow control frames were detected, and in fact there were none. Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congested port surpasses the number of transmitted frames at the MOL (Maximum Offered Load) rate, then back pressure is present.

Thanks to the 7050S-64's generous and dynamic buffer allocation, it can deliver more packets on a congested port than the MOL predicts; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7050S-64. The Arista 7050S-64 is designed with dynamic buffer pool allocation such that during a microburst of traffic, as during the RFC 2889 congestion test when multiple traffic sources are destined to the same port, packets are buffered in packet memory. Unlike architectures with fixed per-port packet memory, the 7050S-64 uses Dynamic Buffer Allocation (DBA) to allocate packet memory to ports that need it. Under congestion, packets are buffered in a 9MB shared packet memory, and the 7050S-64 uses DBA to allocate up to 5MB of packet memory to a single port for lossless forwarding, as observed during this RFC 2889 congestion test.
The Apresia 15K demonstrated HOL blocking at various packet sizes in both L2 and L3 forwarding. In addition, performance degradation was observed beyond the 1,518-byte packet size during the L2 congestion test and beyond the 512-byte packet size during the L3 congestion test.
At every frame size (64 through 9,216 bytes), both switches delivered an aggregated forwarding rate (AFR) of 100% of line rate with no head-of-line (HOL) blocking and with back pressure (BP) present, for both Layer 2 and Layer 3:

Switch                     Layer 2 AFR%/HOL/BP    Layer 3 AFR%/HOL/BP
Extreme X670V              100% / N / Y           100% / N / Y
Alcatel-Lucent 6900-X40    100% / N / Y           100% / N / Y
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure. Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.
ToR Switches RFC 2889 Congestion Test 150% of line rate into single 40GbE
New to the fall 2011 Lippis/Ixia test is congestion testing at 40GbE. Both the Alcatel-Lucent 6900-X40 and Extreme X670V delivered 100% throughput as a percentage of line rate during congestion conditions, thanks to back pressure or flow control. Neither switch exhibited HOL blocking. These switches offer 40GbE congestion performance in line with their 10GbE performance, a major plus.
24 Port ToR Switches RFC 3918 IP Multicast Test
Frames/Packet    Throughput (% Line Rate)      Agg Average Latency (ns)
Size (bytes)     G8124     7124SX              G8124     7124SX
64               100%      100%                650.00    562.82
128              100%      100%                677.00    590.09
256              100%      100%                713.00    558.44
512              100%      100%                715.00    570.91
1,024            100%      100%                713.00    561.63
1,280            100%      100%                725.00    569.22
1,518            100%      100%                707.00    569.73
2,176            100%      100%                713.00    569.17
9,216            100%      100%                712.00    565.65

G8124 = IBM BNT RackSwitch G8124; 7124SX = Arista 7124SX. The Brocade VDX 6720-24 and VDX 6730-32 did not participate.*
* Brocade does not support IP Multicast as it’s a layer 2 switch.
The Arista 7124SX and IBM BNT RackSwitch G8124 performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. The Arista 7124SX demonstrated over 100ns less latency than the IBM BNT RackSwitch G8124 at various packet sizes. Notice that both switches demonstrated consistently low latency across all packet sizes. The Brocade VDX 6720-24 and 6730-32 are layer 2 cut-through switches and thus did not participate in the IP Multicast test.
(smaller values are better)
ToR Switches RFC 3918 IP Multicast Test
Frames/Packet    Throughput (% Line Rate)          Agg Average Latency (ns)
Size (bytes)     7050S-64   G8264   X670V          7050S-64   G8264     X670V
64               100%       100%    100%           894.90     1059.00   955.82
128              100%       100%    100%           951.95     1108.00   968.43
256              100%       100%    100%           1072.35    1139.00   985.98
512              100%       100%    100%           1232.06    1305.00   998.16
1,024            100%       100%    100%           1272.03    1348.00   1011.00
1,280            100%       100%    100%           1276.35    1348.00   1012.53
1,518            100%       100%    100%           1261.49    1332.00   1010.16
2,176            100%       100%    100%           1272.46    1346.00   1009.82
9,216            100%       100%    100%           1300.35    1342.00   1009.86

7050S-64 = Arista 7050S-64; G8264 = IBM BNT RackSwitch G8264; X670V = Extreme X670V.
* Mellanox/Voltaire & Apresia do not support IP Multicast at this time.
The Arista 7050S-64, Extreme X670V, IBM BNT RackSwitch G8264 and Dell/Force10 S4810 performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. In terms of IP Multicast average latency, their data is similar, with higher latencies at larger packet sizes. The Extreme X670V offers the lowest IP Multicast latency for packet sizes greater than 128 bytes, by nearly 300ns. There is a slight difference in aggregate average latency between the Arista 7050S-64 and IBM BNT RackSwitch G8264, with Arista offering 100ns less latency. The Arista, IBM and Extreme ToR switches can be configured in a combination of 10GbE ports plus 2 or 4 40GbE uplink ports, or as 64 10GbE ports. Mellanox/Voltaire and Apresia do not support IP Multicast at this time.
(smaller values are better)
ToR Switches RFC 3918 IP Multicast Test - Cut-Through Mode
Frames/Packet    Throughput (% Line Rate)        Agg Average Latency (ns)
Size (bytes)     S4810    6900-X40               S4810      6900-X40
64               100%     100%                   984.00     1021.98
128              100%     100%                   1033.00    1026.76
256              100%     100%                   1062.00    1043.53
512              100%     100%                   1232.00    1056.07
1,024            100%     100%                   1442.00    1057.22
1,280            100%     100%                   1442.00    1052.11
1,518            100%     100%                   1429.00    1055.53
2,176            100%     100%                   1438.00    1054.58
9,216            100%     100%                   1437.00    1055.64

S4810 = Dell/Force10 S4810; 6900-X40 = Alcatel-Lucent 6900-X40.
The Alcatel-Lucent 6900-X40 and Dell/Force10 S4810 were tested in cut-through mode and performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. In terms of IP Multicast average latency, the Alcatel-Lucent 6900-X40 delivered lower latencies than the Dell/Force10 S4810 at all packet sizes except 64 bytes. At the higher packet sizes, which are the most important, the Alcatel-Lucent 6900-X40 offers more than 300ns faster forwarding than the Dell/Force10 S4810.
(smaller values are better)
Cloud Simulation ToR Switches Zero Packet Loss: Latency Measured in ns

Company/Product            EW Database   EW Server     EW HTTP   EW iSCSI     EW iSCSI     NS Client   NS Server
                           _to_Server    _to_Database            Server_to_   Storage_to_  _to_Server  _to_Client
                                                                 Storage      Server
Mellanox/Voltaire 6048     11401         2348          10134     2884         10454        3366        2197
Dell/Force10 S4810         6890          1010          4027      771          6435         1801        334
Brocade VDX 6720-24        7653          675           4805      1298         7901         1666        572
Arista 7124SX              7439          692           4679      1256         7685         1613        530
Arista 7050S-64            8173          948           4609      1750         8101         1956        1143
Brocade VDX 6730-32        7988          705           4852      1135         8191         1707        593
Extreme X670V              2568          1354          2733      1911         3110         1206        1001
Alcatel-Lucent 6900-X40    3719          1355          2710      2037         3085         995         851
The Alcatel-Lucent 6900-X40, Arista 7124SX and 7050S-64, Brocade VDX 6720-24 and VDX 6730-32, Extreme X670V, Dell/Force10 S4810 and Mellanox/Voltaire Vantage 6048 delivered 100% throughput with zero packet loss
and nanosecond latency during the cloud simulation test. Both the Alcatel-Lucent 6900-X40 and Extreme X670V offer the lowest overall latency in cloud simulation measured thus far. All switches demonstrated latency spikes for the EW Database-to-Server, EW HTTP and EW iSCSI-Storage-to-Server cloud protocols. The Mellanox/Voltaire Vantage 6048 added between 300ns and 600ns of latency relative to all others.
Power Consumption 24 Port ToR Switches
Charts: WattsATIS/10GbE Port; 3-Year Cost/WattsATIS/10GbE; TEER; 3-Year Energy Cost as a % of List Price (Arista 7124SX, IBM BNT RackSwitch G8124, Brocade VDX 6720-24, Brocade VDX 6730-32).
The Brocade VDX 6730-32 is the standout in this group of ToR switches, with a low 1.5 WattsATIS and a TEER value of 631, the lowest power consumption measured in these industry tests thus far. Note that the Brocade VDX 6730-32 was equipped with only one power supply. Also, the VDX 6730-32 supports 24 10GbE ports plus eight 2/4/8 Gbps FCoE ports; this test populated it with only the 24 10GbE ports. If the FCoE ports were operational, power consumption would rise.

The Arista 7124SX and IBM BNT RackSwitch G8124 did not differ materially in their power consumption or in three-year energy cost as a percentage of list price. All 24 port ToR switches are highly energy efficient, with high TEER values and low WattsATIS.
Company / Product    WattsATIS/port    Cost/WattsATIS/Year    3-yr Energy Cost per 10GbE Port    TEER    3-yr Energy Cost as a % of List Price    Cooling
IBM BNT Rack Switch G8124 5.5 $6.70 $20.11 172 4.04% Front-to-Back
Brocade VDXTM 6720-24 3.1 $3.72 $11.15 310 2.24% Front-to-Back, Back-to-Front
Arista 7124SX 6.8 $8.24 $24.71 140 4.56% Front-to-Back, Back-to-Front
Brocade VDXTM 6730-32 1.5 $1.83 $5.48 631 0.82% Front-to-Back, Back-to-Front
(larger values are better)
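The table's energy-cost columns can be reproduced from WattsATIS/port alone. The sketch below assumes continuous 24x7 operation and an electricity rate of roughly $0.139/kWh, the rate implied by the report's own figures; both are assumptions, as the report does not state them explicitly.

```python
HOURS_PER_YEAR = 24 * 365   # continuous operation assumed
RATE_PER_KWH = 0.1391       # assumed $/kWh, implied by the report's figures

def annual_energy_cost(watts_per_port):
    """Annual energy cost per 10GbE port, in dollars, from WattsATIS/port."""
    kwh_per_year = watts_per_port * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * RATE_PER_KWH

# IBM BNT RackSwitch G8124 at 5.5 WattsATIS/port:
print(round(annual_energy_cost(5.5), 2))      # 6.7   -> table's $6.70/year
print(round(3 * annual_energy_cost(5.5), 2))  # 20.11 -> table's 3-yr $20.11
```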
Power Consumption ToR Switches
Charts: WattsATIS/10GbE Port; 3-Year Cost/WattsATIS/10GbE; TEER; 3-Year Energy Cost as a % of List Price (Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Apresia 15K, Extreme X670V, Alcatel-Lucent 6900-X40).
The Arista 7050S-64 is the standout in this group of ToR switches, with a low 2.3 WattsATIS and a TEER value of 404, respectively the lowest and highest we have ever measured, making the Arista 7050S-64 the most power-efficient 48-port-and-above ToR switch measured to date. A close second is a near tie between the Extreme Networks X670V and Alcatel-Lucent 6900-X40, at 3.2 and 3.4 WattsATIS and TEER values of 295 and 280, respectively. All are highly energy efficient ToR switches with high TEER values and low WattsATIS.

While there are differences in three-year energy cost as a percentage of list price, this is because these products were configured differently during the Lippis/Ixia test, with subsequently different list pricing: the Arista 7050S-64 and IBM BNT RackSwitch G8264 were configured with 64 10GbE ports, while the Dell/Force10 S4810 was configured with 48 10GbE ports. The Extreme X670V was configured with 48 10GbE and 4 40GbE ports. The Alcatel-Lucent 6900-X40 was configured with 40 10GbE and 6 40GbE ports.

Hitachi Cable was unable to provide list pricing information; thus there is no data on the Apresia 15K's three-year energy cost as a percentage of list price.
Company / Product    WattsATIS/port    Cost/WattsATIS/port    3-yr Energy Cost per 10GbE Port    TEER    3-yr Energy Cost as a % of List Price    Cooling
Arista 7050S-64 2.3 $2.85 $8.56 404 1.82% Front-to-Back, Back-to-Front
IBM BNT Rack Switch G8264 3.9 $4.78 $10.75 241 4.08% Front-to-Back
Dell/Force10 S4810 4.0 $4.91 $14.73 235 2.83% Front-to-Back
Mellanox/Voltaire 6048 5.5 $6.70 $20.11 172 4.17% Front-to-Back
Apresia* 15K 3.6 $13.09 $13.09 264 Front-to-Back
Extreme X670V 3.2 $3.91 $11.74 295 2.21% Front-to-Back
Alcatel-Lucent OmniSwitch 6900-X40 3.4 $4.12 $12.36 280 2.82% Front-to-Back
(larger values are better)
* Apresia did not provide list pricing
ToR Industry Recommendations
The following provides a set of recommendations to IT business leaders and network architects for their consideration as they seek to design and build their private/public data center cloud network fabric. Most of the recommendations are based upon our observations and analysis of the test data. For a few recommendations, we extrapolate from this baseline of test data to incorporate key trends and how these ToR switches may be used in support of corporate advantage.
10GbE ToR Switches Ready for Deployment: 10GbE ToR switches are ready for prime time, delivering full line rate throughput at zero packet loss and nanosecond latency plus single to double-digit delay variation. We are now in the era of 500ns latency ToR switching. In addition, these ToR switches offer low power consumption with energy cost over a three-year period estimated between 1.8% and 4% of acquisition cost.
Evaluate Each ToR Switch Separately: While this category of ToR switch tested very well, there are differences between vendors, especially in the congestion and cloud simulation tests. Therefore, it is recommended to review each supplier's results and make purchase decisions accordingly.
Deploy ToR Switches that Demonstrate Efficient Power and Cooling: In addition to high TEER values and low WattsATIS, all ToR switches tested support front-to-back or back-to-front cooling in support of data center hot/cold aisle airflow designs; in fact, some offer a reversible option too. These ToR switches can therefore be used as part of an overall green data center initiative.
Ready for Storage Enablement: Most of the ToR switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for CEE, FCoE, iSCSI, ATA, NAS, etc., and while these features were not tested in the Lippis/Ixia evaluation, these switches demonstrated that the raw capacity for their support is built in. In fact, some now offer both storage and Ethernet ports.
Evaluate ToR Switches for Network Services and Fabric Differences: There are differences between suppliers in terms of the network services they offer as well as how their ToR switches connect to Core/Spine switches to create a data center fabric. It is recommended that IT business leaders evaluate ToR switches with Core switches to assure that the network services and fabric attributes sought after are realized.
Single Chip Design Advantages: With most, but not all, ToR switches being designed with merchant silicon and proving their performance and power consumption advantages, expect many other suppliers to introduce ToR switches based upon this design. Competitive differentiation will quickly shift toward network operating system and network services support, especially virtualization-aware, unified networking and software-defined networking with OpenFlow.
Connect Servers at 10GbE: The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric exists to take full advantage of server I/O at 10GbE bandwidth and latency levels.
40GbE Uplink Options Become Available: The Alcatel-Lucent 6900-X40, Arista 7050S-64, Extreme Networks X670V, IBM BNT RackSwitch G8264 and Dell/Force10 S4810 support a range of 40GbE ports and configuration options. The Lippis/Ixia evaluation tested the Arista 7050S-64 and IBM BNT RackSwitch G8264 using their 40GbE ports as a fan-in of four 10GbE to increase 10GbE port density to 64. The Alcatel-Lucent 6900-X40 and Extreme Networks X670V were tested with six and four 40GbE links, respectively, for all tests, a first in these industry tests. These options offer design flexibility at no cost to performance, latency or power consumption, and thus we recommend their use. Further, in the spring of 2012 we expect many new 40GbE core switches to be available to accept these 40GbE uplinks.
Core/Spine Cross-Vendor Analysis
There were four Core/Spine switches evaluated for performance and power consumption in the Lippis/Ixia test. The participating vendors were:
Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Extreme Networks BlackDiamond® X8
Juniper Networks EX Series EX8200 Ethernet Switch
These switches represent the state of the art of computer network hardware and software engineering, and are central to private/public data center cloud computing infrastructure; if not for this category of Ethernet switching, cloud computing would not exist. The Lippis/Ixia public test was the first evaluation for every Core switch tested. Each supplier's Core switch was evaluated for its fundamental performance and power consumption features. The Lippis/Ixia test results demonstrate that these new Core switches provide state-of-the-art performance at power consumption levels not seen before. The port density tested for these Core switches ranged from 128 10GbE ports to a high of 256 10GbE ports plus, for the first time, 24 40GbE ports on the Extreme Networks BlackDiamond® X8.
IT business leaders are responding favorably to Core switches equipped with a value proposition of high performance, high port density, competitive acquisition cost, virtualization-aware services, high reliability and low power consumption. These Core switches are currently in high demand, with quarterly revenues for mid-size firms in the $20 to $40M-plus range. The combined market run rate for both ToR and Core 10GbE switching is measured in the multi-billion-dollar range. Further, Core switch price points on a per-10GbE-port basis range from a low of $1,200 to a high of $6,093. 40GbE core ports are priced between 3 and 4 times that of 10GbE core ports. List prices vary from $230K to $780K, with an average order usually being in the million-plus-dollar range. While there is a large difference in list price as well as price per port between vendors, the reason is found in the number of network services supported by the various suppliers and in 10GbE/40GbE port density.
We compare each of the above firms in terms of their ability to forward packets: quickly (i.e., latency), without loss (i.e., throughput at full line rate), when ports are oversubscribed with network traffic by 150%, in IP multicast mode and in cloud simulation. We also measure their power consumption as described in the Lippis Test Methodology section.
Core Switches RFC 2544 Latency Test - Average Latency All Switches Performed at Zero Frame Loss
Average latency in ns (smaller values are better)

Packet Size      Layer 2                                                 Layer 3
(bytes)          Alcatel-Lucent*   Arista     Juniper    Extreme         Alcatel-Lucent*   Arista     Juniper    Extreme
                 OmniSwitch 10K    7504       EX8216     BDX8            OmniSwitch 10K    7504       EX8216     BDX8
64               20864.00          17791.20   11891.60   2273.50         20128.00          9994.88    12390.38   2273.30
128              20561.00          6831.50    11366.20   2351.80         20567.00          6821.32    12112.50   2354.00
256              20631.00          7850.70    11439.60   2493.60         20638.00          7858.69    11814.05   2492.90
512              20936.00          8452.00    11966.80   2767.60         20931.00          8466.53    12685.41   2766.30
1,024            21216.00          8635.50    13322.90   3331.50         21211.00          8647.96    13713.25   3324.30
1,280            21787.00          8387.00    14131.80   3588.80         21793.00          8387.26    14318.48   3581.00
1,518            22668.00          8122.10    14789.60   3830.30         22658.00          8112.95    14664.84   3826.50
2,176            25255.00          8996.00    16302.70   4542.80         25242.00          8996.55    15767.44   4504.10
9,216            36823.00          12392.80   34722.70   12095.10        45933.00          12387.99   28674.08   11985.90
We show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance between modules and across their backplanes.

The clear standout here is the Extreme Networks BlackDiamond® X8, whose latency measurements are 2 to 9 times faster at forwarding packets than all others.

Another standout is the Arista 7504's large latency at the 64-byte packet size. According to Arista, the Arista 7500 series performance and design are fine-tuned for the applications its customers use; these applications use mixed packet sizes, rather than an artificial stream of wire-speed 64-byte packets.

* RFC 2544 was conducted on the OmniSwitch 10K via default configuration; however, Alcatel-Lucent has an optimized configuration for low-latency networks which it says improves these results.
Core Switches RFC 2544 Latency Test - Average Delay Variation: All Switches Performed at Zero Frame Loss
Average delay variation in ns (smaller values are better)

Packet Size      Layer 2                                             Layer 3
(bytes)          Alcatel-Lucent*   Arista   Juniper   Extreme        Alcatel-Lucent*   Arista   Juniper   Extreme
                 OmniSwitch 10K    7504     EX8216    BDX8           OmniSwitch 10K    7504     EX8216    BDX8
64               9.00              9.00     9.00      8.30           9.00              9.00     9.00      9.98
128              5.00              5.00     5.00      5.40           5.00              5.00     5.00      5.50
256              5.00              6.00     5.40      5.40           5.00              5.95     5.11      5.40
512              8.00              8.00     7.90      7.60           8.00              8.00     8.00      7.60
1,024            7.00              7.00     7.00      7.10           7.00              7.00     7.00      7.20
1,280            5.00              6.00     5.10      5.60           4.00              6.00     5.00      5.50
1,518            10.00             10.00    9.50      9.30           10.00             10.00    9.56      9.30
2,176            5.00              5.00     5.00      5.00           5.00              5.00     5.00      5.00
9,216            10.00             10.50    9.90      9.10           10.00             10.60    10.00     9.10
Notice there is little difference in average delay variation across all vendors, proving consistent latency under heavy load at zero packet loss for L2 and L3 forwarding. Average delay variation is in the 5 to 10 ns range. Differences do reside within average latency between suppliers, thanks to different approaches to buffer management and port densities.

* RFC 2544 was conducted on the OmniSwitch 10K via default configuration; however, Alcatel-Lucent has an optimized configuration for low-latency networks which it says improves these results.
Core Switches RFC 2544 Throughput Test
Throughput as % of line rate

Packet Size      Layer 2                                             Layer 3
(bytes)          Alcatel-Lucent    Arista   Juniper   Extreme        Alcatel-Lucent    Arista   Juniper   Extreme
                 OmniSwitch 10K    7504     EX8216    BDX8           OmniSwitch 10K    7504     EX8216    BDX8
64               100%              100%     100%      100%           100%              100%     100%      100%
128              100%              100%     100%      100%           100%              100%     100%      100%
256              100%              100%     100%      100%           100%              100%     100%      100%
512              100%              100%     100%      100%           100%              100%     100%      100%
1,024            100%              100%     100%      100%           100%              100%     100%      100%
1,280            100%              100%     100%      100%           100%              100%     100%      100%
1,518            100%              100%     100%      100%           100%              100%     100%      100%
2,176            100%              100%     100%      100%           100%              100%     100%      100%
9,216            100%              100%     100%      100%           100%              100%     100%      100%
As expected, all switches are able to forward L2 and L3 packets of all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance devices. Note that while all deliver 100% throughput, the Extreme BlackDiamond® X8 was the only device configured with 24 40GbE ports plus 256 10GbE ports.
Core Switches RFC 2889 Layer 2 Congestion Test 150% of Line Rate into Single 10GbE
All Core switches performed extremely well under congestion conditions, with nearly no HOL blocking and between 77% and 100% aggregated forwarding as a percentage of line rate. At 150% of offered load, a Core switch port receiving at 150% of line rate is expected to show 33% loss if it is not performing back pressure or flow control.

A few standouts arose in this test. First, Arista's 7504 delivered nearly 84% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows, the highest for all suppliers without utilizing flow control. This is due to its generous 2.3GB of buffer memory as well as its VOQ architecture. Note also that Arista is the only Core
                 Layer 2                                                                        Layer 3
Packet Size      Alcatel-Lucent     Arista 7504       Juniper EX8216    Extreme BDX8            Alcatel-Lucent     Arista 7504       Juniper EX8216    Extreme BDX8
(bytes)          AFR%    HOL BP     AFR%    HOL BP*   AFR%    HOL BP    AFR%  HOL BP            AFR%    HOL BP     AFR%    HOL BP*   AFR%    HOL BP    AFR%  HOL BP
64               77.80%  N   N      83.34%  N   Y     77.01%  N   N     100%  N   Y             77.80%  N   N      84.34%  N   Y     77.81%  N   N     100%  N   Y
128              77.80%  N   N      83.34%  N   Y     75.01%  N   N     100%  N   Y             77.80%  N   N      84.34%  N   Y     77.84%  N   N     100%  N   Y
256              77.80%  N   N      83.35%  N   Y     75.02%  N   N     100%  N   Y             77.80%  N   N      84.35%  N   Y     77.83%  N   N     100%  N   Y
512              77.80%  N   N      83.35%  N   Y     75.03%  N   N     100%  N   Y             77.80%  N   N      84.35%  N   Y     74.20%  N   N     100%  N   Y
1,024            77.80%  N   N      83.36%  N   Y     76.06%  N   N     100%  N   Y             77.80%  N   N      84.36%  N   Y     72.78%  N   N     100%  N   Y
1,280            77.80%  N   N      83.36%  N   Y     76.06%  N   N     100%  N   Y             77.80%  N   N      84.36%  N   Y     77.87%  N   N     100%  N   Y
1,518            77.80%  N   N      83.36%  N   Y     75.12%  N   N     100%  N   Y             77.80%  N   N      84.36%  N   Y     77.69%  N   N     100%  N   Y
2,176            77.90%  N   N      83.36%  N   Y     76.06%  N   N     100%  N   Y             77.90%  N   N      84.36%  N   Y     77.91%  N   N     100%  N   Y
9,216            78.40%  N   N      83.36%  N   Y     75.06%  N   N     100%  N   Y             78.40%  N   N      78.21%  Y   N     78.16%  N   N     100%  N   Y
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure

* Note that while Arista's 7504 shows back pressure, in fact there is none. All test equipment, including Ixia, calculates back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7504's 2.3GB of packet buffer memory, it can accept more packets than the MOL; therefore, Ixia or any test equipment "calculates/sees" back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7504. The Arista 7504 can buffer up to 40ms of traffic per port at 10GbE speeds, which is 400M bits, or 5,425 packets of 9,216 bytes.
switch that showed HOL blocking, for 9,216-byte packets at L3. The 7504 was in beta testing at the time of the Lippis/Ixia test, and there was a "corner case" at 9,216 bytes at L3 that needed further tuning. Arista states that its production code provides wire-speed L2/L3 performance at all packet sizes without any head-of-line blocking.

Second, Alcatel-Lucent and Juniper delivered consistent and perfect performance with no HOL blocking or back pressure for all packet sizes during both L2 and L3 forwarding.
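The 150%-overload expectation and the 7504 buffer note reduce to simple arithmetic, sketched here using only the figures already quoted in this section:

```python
# With 150% of line rate offered to a port and no flow control, the egress
# can drain at most 100% of line rate, so one third of offered traffic drops.
offered = 1.5                        # offered load, as a multiple of line rate
forwarded = min(offered, 1.0)
loss_fraction = (offered - forwarded) / offered
print(round(loss_fraction, 3))       # 0.333 -> the expected 33% loss

# Arista 7504 buffer claim: 40 ms of traffic per port at 10GbE.
buffer_bits = 0.040 * 10e9           # 400 Mbit (about 50 MB) per port
jumbo_frames = buffer_bits / 8 / 9216
print(round(jumbo_frames))           # 5425 frames of 9,216 bytes
```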
Core Switches RFC 2889 Layer 2 Congestion Test 150% of Line Rate into Single 40GbE
The Extreme Networks BDX8 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. We tested congestion in both 10GbE and 40GbE configurations, a first for these tests.

The Extreme Networks BDX8 sent flow control frames to the Ixia test gear, signaling it to slow down the rate of incoming traffic flow. We anticipate that this will become a feature that most core switch vendors adopt, as it provides reliable network forwarding when the network is the data center fabric supporting both storage and datagram flows.
Packet Size      Layer 2: Extreme BDX8         Layer 3: Extreme BDX8
(bytes)          AFR%   HOL   BP               AFR%   HOL   BP
64 100% N Y 100% N Y
128 100% N Y 100% N Y
256 100% N Y 100% N Y
512 100% N Y 100% N Y
1,024 100% N Y 100% N Y
1,280 100% N Y 100% N Y
1,518 100% N Y 100% N Y
2,176 100% N Y 100% N Y
9,216 100% N Y 100% N Y
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure

* Extreme's BDX8 was the only Core switch to utilize back pressure to slow down the rate of traffic flow so as to maintain 100% throughput.
Core Switches RFC 3918 IP Multicast Test
             Throughput (% Line Rate)                             Aggregated Average Latency in ns (smaller values are better)
Packet Size  Alcatel-Lucent   Arista   Juniper   Extreme          Alcatel-Lucent   Arista   Juniper   Extreme
(bytes)      OmniSwitch 10K   7504*    EX8216    BDX8             OmniSwitch 10K   7504*    EX8216    BDX8
64           100%             n/a      100%      100%             9596             n/a      27210     2452
128          100%             n/a      100%      100%             9646             n/a      26172     2518
256          100%             n/a      100%      100%             9829             n/a      25135     2648
512          100%             n/a      100%      100%             10264            n/a      26600     2904
1,024        100%             n/a      100%      100%             11114            n/a      28802     3417
1,280        100%             n/a      100%      100%             11625            n/a      29986     3668
1,518        100%             n/a      100%      100%             11921            n/a      30746     3915
2,176        100%             n/a      100%      100%             13023            n/a      33027     4574
9,216        100%             n/a      100%      100%             28059            n/a      60055     11622
* Arista did not support IP Multicast at the time of testing.
At the time of testing in December 2010, the Arista 7504 did not support IP Multicast; both the 7504 and 7508 now do with EOS 4.8.2. The Extreme Networks BDX8 offers the lowest IP Multicast latency at the highest port density tested to date in these industry tests. The Extreme Networks BDX8 is the fastest at forwarding IP Multicast thanks to its replication chip design, delivering forwarding speeds that are 3-10 times faster than all others. Alcatel-Lucent's
OmniSwitch 10K demonstrated 100% throughput at line rate. Juniper's EX8216 also demonstrated 100% throughput at line rate, with an IP multicast aggregated average latency more than twice that of Alcatel-Lucent's OmniSwitch.
Company / Product               EW Database_  EW Server_    EW HTTP  EW iSCSI-    EW iSCSI-    NS Client_  NS Server_
                                to_Server     to_Database            Server_to_   Storage_to_  to_Server   to_Client
                                                                     Storage      Server
Alcatel-Lucent OmniSwitch 10K   28125         14063         19564    17140        26780        13194       11225
Arista 7504                     14208         4394          9171     5656         13245        5428        4225
Juniper EX8216                  30263         11614         18833    19177        26927        12145       11322
Extreme BDX8*                   3773          1339          4887     2097         3135         3368        3808
Cloud Simulation, Core Switches, Zero Packet Loss: Latency Measured in ns
All Core switches delivered 100% throughput with zero packet loss during the cloud simulation test. Alcatel-Lucent's and Juniper's average latencies for various traffic flows were consistent with each other, while Arista's latency was measured significantly lower for specific traffic types. But the clear standout here is the Extreme Networks BDX8, which forwards a range of cloud protocols 4 to 10 times faster than the others tested.

(smaller values are better)

* Extreme Networks BDX8 was measured in Store & Forward plus Cut-Through Mode. For cut-through cloud simulation latency results, see the X8 write-up.
Power Consumption Core Switches
[Charts: WattsATIS per 10GbE port, 3-yr Energy Cost per 10GbE port, TEER, and 3-Year Energy Cost as a % of List Price for each Core switch.]
There are differences in power efficiency across these Core switches. The Alcatel-Lucent OmniSwitch 10K, Arista 7504, Extreme BDX8 and Juniper EX8216 represent a breakthrough in power efficiency for Core switches, with the previous generation of switches consuming as much as 70 WattsATIS. These switches consume between 8 and 21 WattsATIS, with TEER values between 44 and 117; remember, the higher the TEER value, the better.
The standout here is the Extreme Networks BDX8, with the lowest WattsATIS and highest TEER values measured thus far for Core switches in these Lippis/Ixia tests. The BDX8's actual Watts per 10GbE port would be lower still when fully populated with 756 10GbE or 192 40GbE ports. During the Lippis/Ixia test, the BDX8 was populated to only a third of its port capacity but was equipped with power supplies, fans, management and switch fabric modules for full port-density population. Therefore, when this full power capacity is divided across a fully populated BDX8, its WattsATIS per 10GbE port will be lower than the measurement observed.
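The full-population argument can be illustrated with a quick sketch. The fully populated draw was not measured; the numbers below simply back out an approximate chassis draw from the measured per-port figure and ignore any incremental line-card power, which would raise the total somewhat:

```python
def watts_per_10gbe_port(total_watts: float, ports: int) -> float:
    # Chassis power divided over the number of 10GbE ports it serves.
    return total_watts / ports


# Approximate chassis draw implied by 8.1 W x 256 populated ports,
# then spread over the full 756-port capacity (illustrative only).
chassis_watts = 8.1 * 256
print(round(watts_per_10gbe_port(chassis_watts, 256), 1))  # 8.1 as tested
print(round(watts_per_10gbe_port(chassis_watts, 756), 1))  # 2.7 if fully populated
```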
Company / Product               WattsATIS/port  3-yr Energy Cost  TEER  3-yr Energy Cost        Cooling
                                                per 10GbE Port          as a % of List Price
Alcatel-Lucent OmniSwitch 10K   13.3            $48.77            71    2.21%                   Front-to-Back
Arista 7504                     10.3            $37.69            92    3.14%                   Front-to-Back
Juniper EX8216                  21.7            $79.26            44    1.30%                   Side-to-Side*
Extreme BDX8                    8.1             $29.60            117   1.79%                   Front-to-Back
(larger values are better)
* 3rd party cabinet options from vendors such as Chatsworth support hot-aisle cold-aisle deployments of up to two EX8216 chassis in a single cabinet.
The three-year energy cost per 10GbE port shows the difference between the four Core switches, with 3-yr estimated cost per 10GbE port ranging from a low of $29.60 to a high of $79.26. In terms of cooling, Juniper's EX8216 is the only Core switch that does not support front-to-back airflow. However, third-party cabinet options from vendors such as Chatsworth support hot-aisle/cold-aisle deployments of up to two EX8216 chassis in a single cabinet.
Core Switch Industry Recommendations
The following provides a set of recommendations to IT business leaders and network architects for their consideration as they seek to design and build their private/public data center cloud network fabric. Most of the recommendations are based upon our observations and analysis of the test data. For a few recommendations, we extrapolate from this baseline of test data to incorporate key trends and how these Core switches may be used in support of corporate advantage.
10GbE Core Switches Ready for Deployment: 10GbE Core switches are ready for prime time, delivering full line rate throughput at zero packet loss and nanosecond latencies plus single to double-digit delay variation. In addition, these Core switches offer low power consumption with energy cost over a three-year period estimated between 1.3% and 3.14% of acquisition cost.
Evaluate Each Core Switch Separately: While this category of Core switch tested very well, there are differences between vendors, especially in the RFC 2889 congestion, RFC 3918 IP Multicast, power consumption and cloud simulation tests. Therefore, it is recommended to review each supplier's results and make purchase decisions accordingly.
Deploy Core Switches that Demonstrate Efficient Power and Cooling: In addition to solid TEER values and low WattsATIS, all Core switches tested, except the EX8216, support front-to-back or back-to-front cooling in support of data center hot/cold aisle airflow designs; Juniper does support front-to-back airflow via the purchase of a third-party cabinet option. These Core switches can therefore be used as part of an overall green data center initiative.
Ready for Storage Enablement: Most of the Core switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for CEE, FCoE, iSCSI, NAS, etc., and while these features were not tested in the Lippis/Ixia evaluation, these switches demonstrated that the raw capacity for their support is built in. Look for more vendors to support flow control in their core switches.
Evaluate Core Switches for Network Services and Fabric Differences: There are differences between suppliers in terms of the network services they offer as well as how their Core switches connect to ToR/Leaf switches to create a data center fabric. It is recommended that IT business leaders evaluate Core switches with ToR switches to assure that the network services and fabric attributes sought after are realized.
Connect Servers at 10GbE: The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric exists to take full advantage of server I/O at 10GbE bandwidth and latency levels.
40GbE Uplink Termination Options Available: The Extreme BDX8 supports 192 40GbE ports, of which 24 were tested here. The OmniSwitch 10K, Juniper EX8216 and Arista 7504 possess the hardware architecture and scale to support 40GbE and 100GbE modules. While only Extreme was tested at 40GbE, we expect more to be available during the Spring 2012 Lippis Report test at iSimCity. Further, we expect 40GbE adoption to start in earnest during 2012, thanks to 40GbE cost being only 3-4 times that of 10GbE.
Consider a Two-Tier Network Architecture: Core switch port density is on the rise, with 10GbE port densities in the 128- to 756-port range plus 40GbE in the 192-port density range. We tested Core switches with 256 10GbE and 24 40GbE ports and found that these products forward L2 and L3 packets at wire speed with nanosecond-to-microsecond latency while offering excellent congestion management; they are ready to be deployed in a two-tier network architecture made up of Core and ToR switches. This design is preferable for private/public data center cloud computing scale deployments. The OmniSwitch 10K, Arista 7504, Extreme BDX8 and Juniper EX8216 possess the performance and power efficiency required to be deployed in a two-tier network architecture, reducing application latency plus capital and operational cost.
Evaluate Link Aggregation Technologies: Not tested in the Lippis/Ixia evaluation was a Core switch's ability to support MC-LAG (Multi-Chassis Link Aggregation Group), ECMP (Equal-Cost Multi-Path routing), TRILL (Transparent Interconnection of Lots of Links), SPB (802.1aq Shortest Path Bridging), Cisco's FabricPath, Juniper's QFabric or Brocade's VCS. This is a critical test to determine Ethernet fabric scale. It is recommended that IT business leaders and their network architects evaluate each supplier's MC-LAG, ECMP, TRILL, SPB, FabricPath, QFabric or VCS approach, performance, congestion and latency characteristics.
The Lippis Report Test Methodology
To test products, each supplier brought its engineers to configure its equipment for test. An Ixia test engineer was available to assist each supplier through test methodologies and review test data. After testing was concluded, each supplier's engineer signed off on the resulting test data. We call the following set of tests "The Lippis Test." The test methodologies included:
Throughput Performance: Throughput, packet loss and delay for layer 2 (L2) unicast, layer 3 (L3) unicast and layer 3 multicast traffic were measured for packet sizes of 64, 128, 256, 512, 1024, 1280, 1518, 2176 and 9216 bytes. In addition, a special cloud computing simulation throughput test consisting of a mix of north-south plus east-west traffic was conducted. Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used to perform all but the multicast tests. Ixia's IxAutomate RFC 3918 Throughput No Drop Rate test was used for the multicast test.
Latency: Latency was measured for all the above packet sizes plus the special mix of north-south and east-west traffic. Two latency tests were conducted: 1) latency was measured as packets flowed between two ports on different modules for modular switches, and 2) between far-apart ports (port pairing) for ToR switches to demonstrate latency consistency across the forwarding engine chip. Latency test port configuration was via port pairing across the entire device rather than side-by-side. This meant that for a switch with N ports, port 1 was paired with port (N/2)+1, port 2 with port (N/2)+2, and so on. Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used for validation.
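The port-pairing scheme described above is straightforward to express in code. The following is an illustrative sketch only; the function name and output format are our own and are not part of the Ixia test suite:

```python
def port_pairs(n_ports):
    """Pair each port in the first half of the switch with the
    corresponding port in the second half: port 1 with (N/2)+1,
    port 2 with (N/2)+2, and so on, so test traffic crosses the
    entire device rather than adjacent ports."""
    half = n_ports // 2
    return [(i, i + half) for i in range(1, half + 1)]

# For an 8-port example, pairs are (1,5), (2,6), (3,7), (4,8)
pairs = port_pairs(8)
```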
Jitter: Jitter statistics were measured during the above throughput and latency tests using Ixia's IxNetwork RFC 2544 Throughput/Latency quick test.
Congestion Control Test: Ixia's IxNetwork RFC 2889 Congestion test was used to test both L2 and L3 packets. The objective of the Congestion Control Test is to determine how a Device Under Test (DUT) handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines whether Head-of-Line (HOL) blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the full amount of traffic to the congested port and, as a result, is also losing frames destined for the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present.
RFC 2544 Throughput/Latency Test
Test Objective: This test determines the processing overhead of the DUT required to forward frames and the maximum rate of receiving and forwarding frames without frame loss.
Test Methodology: The test starts by sending frames at a specified rate, usually the maximum theoretical rate of the port while frame loss is monitored. Frames are sent from and received at all ports on the DUT, and the transmission and reception rates are recorded. A binary, step or combo search algorithm is used to identify the maximum rate at which no frame loss is experienced.
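The binary search variant of the procedure above can be sketched as follows. This is a simplified illustration, not Ixia's implementation; `frames_lost_at` stands in for running a real trial at a given offered load, and the 0.1% resolution is our own assumption:

```python
def max_no_loss_rate(frames_lost_at, resolution=0.1):
    """Binary-search the highest offered load (percent of line rate)
    at which a trial reports zero frame loss."""
    lo, hi, best = 0.0, 100.0, 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if frames_lost_at(mid) == 0:   # trial passed: search higher
            best, lo = mid, mid
        else:                          # frames were lost: search lower
            hi = mid
    return best

# Hypothetical DUT that starts dropping frames above 87.3% load:
rate = max_no_loss_rate(lambda load: 0 if load <= 87.3 else 1)
```

The search converges on the loss threshold to within the chosen resolution, which is why the report can state a single no-loss throughput figure per frame size.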
To determine latency, frames are transmitted for a fixed duration. During half of the transmission duration, frames are tagged once each second, and the transmit and receive timestamps on the tagged frames are compared. The difference between
the two timestamps is the latency. The test uses a one-to-one traffic mapping. For store-and-forward DUT switches, latency is defined in RFC 1242 as the time interval starting when the last bit of the input frame reaches the input port and ending when the first bit of the output frame is seen on the output port. Thus latency depends not only on link speed but on processing time as well.
Results: This test captures the following data: total number of frames transmitted from all ports, total number of frames received on all ports, and percentage of lost frames for each frame size, plus latency, jitter, sequence errors and data integrity errors.
The following graphic depicts the RFC 2544 throughput performance and latency test conducted at the iSimCity lab for each product.
RFC 2889 Congestion Control Test
Test Objective: The objective of the Congestion Control Test is to determine how a DUT handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines whether HOL blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the full amount of traffic to the congested port and, as a result, is also losing frames destined for the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present.
Test Methodology: If the ports are set to half duplex, collisions should be detected on the transmitting interfaces. If the ports are set to full duplex and flow control is enabled, flow control frames should be detected. This test consists of multiples of four ports with the same Maximum Offered Load (MOL). The custom port group mapping consists of two ports, A and B, transmitting to a third port, C (the congested interface), while port A also transmits to port D (the uncongested interface).
Test Results: This test captures the following data: intended load, offered load, number of transmitted frames, number of received frames, frame loss, number of collisions and number of flow control frames for each frame size of each trial.
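The pass/fail reasoning in the test objective can be summarized in a small sketch. This is our own illustrative interpretation of the RFC 2889 criteria, not Ixia code, and the parameter names are hypothetical:

```python
def classify_congestion(uncongested_loss, congested_loss,
                        congested_rx_load, max_offered_load=100.0):
    """Interpret one RFC 2889 congestion trial.
    Loads are in percent of line rate; losses are frame counts."""
    if uncongested_loss > 0:
        # Loss leaked onto the uncongested port: the DUT's inability
        # to drain the congested port is blocking unrelated traffic.
        return "HOL blocking"
    if congested_loss == 0 and congested_rx_load > max_offered_load:
        # No loss despite >100% offered load: the DUT is pushing
        # back on the senders rather than dropping frames.
        return "back pressure"
    return "neither detected"

verdict = classify_congestion(uncongested_loss=120,
                              congested_loss=5000,
                              congested_rx_load=100.0)
```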
The following graphic depicts the RFC 2889 Congestion Control test as conducted at the iSimCity lab for each product.
RFC 3918 IP Multicast Throughput No Drop Rate Test
Test Objective: This test determines the maximum throughput the DUT can support while receiving and transmitting multicast traffic. The input includes protocol parameters (IGMP, PIM), receiver parameters (group addressing), source parameters (emulated PIM routers), frame sizes, initial line rate and search type.
Test Methodology: This test calculates the maximum DUT throughput for IP Multicast traffic using either a binary or a linear search, and collects Latency and Data
Integrity statistics. The test is patterned after the ATSS Throughput test; however, this test uses multicast traffic. A one-to-many traffic mapping is used, with a minimum of two ports required.
If OSPF or IS-IS is chosen as the IGP routing protocol, the transmit port first establishes an IGP routing protocol session and a PIM session with the DUT. IGMP joins are then established for each group on each receive port. Once protocol sessions are established, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins.
If "none" is chosen as the IGP routing protocol, the transmit port does not emulate routers and does not export routes to virtual sources. The source addresses are the IP addresses configured on the Tx ports in the data frames. Once the routes are configured, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins.
Test Results: This test captures the following data: maximum throughput per port, frame loss per multicast group, minimum/maximum/average latency per multicast group and data errors per port.

The following graphic depicts the RFC 3918 IP Multicast Throughput No Drop Rate test as conducted at the iSimCity lab for each product.
Power Consumption Test
Port Power Consumption: Ixia’s IxGreen within the IxAutomate test suite was used to test power consumption at the port level under various loads or line rates.
Test Objective: This test determines the Energy Consumption Ratio (ECR) and the ATIS (Alliance for Telecommunications Industry Solutions) Telecommunications Energy Efficiency Ratio (TEER) during L2/L3 forwarding performance testing. TEER is a measure of network-element efficiency, quantifying a network component's ratio of "work performed" to energy consumed.
Test Methodology: This test performs a calibration run to determine the no-loss throughput of the DUT. Once the maximum throughput is determined, the test runs in automatic or manual mode to determine the L2/L3 forwarding performance while concurrently taking power, current and voltage readings from the power device. Upon completion of the test, the data plane performance and Green (ECR and TEER) measurements are calculated. Engineers followed the methodology prescribed by two ATIS standards documents:
ATIS-0600015.03.2009: Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting for Router and Ethernet Switch Products, and

ATIS-0600015.2009: Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting - General Requirements
The power consumption of each product was measured at various load points: idle (0%), 30% and 100%. The final power consumption was reported as a weighted average calculated using the formula:
WATIS = 0.1*(Power draw at 0% load) + 0.8*(Power draw at 30% load) + 0.1*(Power draw at 100% load).
All measurements were taken over a period of 60 seconds at each load level and repeated three times to ensure result repeatability. The final WATIS result was divided by the total number of ports per switch to arrive at a Watts-per-port measurement per the ATIS methodology, labeled here as WATTSATIS.
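As a worked example of the ATIS weighting above, the calculation might look like this. The power readings and the 64-port count are hypothetical, chosen only to illustrate the arithmetic:

```python
def watts_atis(p_idle, p_30, p_100):
    """ATIS weighted average power draw in Watts:
    10% weight at idle, 80% at 30% load, 10% at 100% load."""
    return 0.1 * p_idle + 0.8 * p_30 + 0.1 * p_100

# Hypothetical chassis: 600 W idle, 700 W at 30% load, 800 W at 100% load
w = watts_atis(600.0, 700.0, 800.0)   # weighted average: 700.0 W
per_port = w / 64                     # per-port figure for a 64-port example
```

The heavy 80% weight on the 30% load point reflects the ATIS assumption that data center switches spend most of their life at moderate utilization.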
Test Results: The L2/L3 performance results include a measurement of WATIS and the DUT TEER value. Note that a larger TEER value is better as it represents more work done at less energy consumption. In the graphics
throughout this report, we use WATTSATIS to identify the ATIS power consumption measurement on a per-port basis.
With the WATTSATIS we calculate a three-year energy cost based upon the following formula.
Cost/WATTSATIS/3-Year = (WATTSATIS/1000) * (3*365*24) * (0.1046) * (1.33), where WATTSATIS = ATIS weighted average power in Watts
3*365*24 = 3 years @ 365 days/yr @ 24 hrs/day
0.1046 = U.S. average retail cost (in US$) of commercial grade power as of June 2010 as per Dept. of Energy Electric Power Monthly
(http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html)
1.33 = Factor to account for power costs plus cooling costs @ 33% of power costs.
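Putting the cost formula together, a worked example follows. The 10 W/port input is hypothetical; the $0.1046/kWh rate and 1.33 cooling factor are the values given above:

```python
def three_year_cost(watts_per_port, rate_usd_per_kwh=0.1046, cooling_factor=1.33):
    """Three-year per-port energy cost in US$, per the report's formula."""
    hours = 3 * 365 * 24              # three years of continuous operation
    kw = watts_per_port / 1000.0      # convert Watts to kilowatts
    return kw * hours * rate_usd_per_kwh * cooling_factor

# A hypothetical 10 W/port switch costs roughly $36.56 per port over three years
cost = three_year_cost(10.0)
```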
The following graphic depicts the per port power consump-tion test as conducted at the iSimCity lab for each product.
Public Cloud Simulation Test
Test Objective: This test determines the traffic delivery performance of the DUT in forwarding a variety of north-south and east-west traffic in cloud computing applications. The input parameters include traffic types, traffic rate, frame sizes, offered traffic behavior and traffic mesh.
Test Methodology: This test measures the throughput, latency, jitter and loss on a per application traffic type basis
across M sets of 8-port topologies, where M is an integer proportional to the number of ports with which the DUT is populated. This test includes a mix of north-south and east-west traffic, and each traffic type is configured for the following parameters: frame rate, frame size distribution, offered traffic load and traffic mesh. The following traffic types are used: web (HTTP), database-server, server-database, iSCSI storage-server, iSCSI server-storage, client-server plus server-client. The north-south client-server traffic simulates Internet browsing, the database traffic simulates server-server lookup and data retrieval, while the storage traffic simulates IP-based storage requests and retrieval. When all traffic is transmitted, the throughput, latency, jitter and loss performance are measured on a per-traffic-type basis.
Test Results: This test captures the following data: maximum throughput per traffic type, frame loss per traffic type, minimum/maximum/average latency per traffic type, minimum/maximum/average jitter per traffic type, data integrity errors per port and CRC errors per port. For this report, we show average latency on a per-traffic-type basis at zero frame loss.
The following graphic depicts the Cloud Simulation test as conducted at the iSimCity lab for each product.
40GbE Testing: For the test plan above, 24 40GbE ports were available for those DUTs that support 40GbE uplinks or modules. During this Lippis/Ixia test, 40GbE testing included latency, throughput, congestion, IP Multicast, cloud simulation and power consumption tests. ToR switches with 4 40GbE uplinks were supported in this test.
Virtualization Scale Calculation
We observe the technical specifications of MAC address table size, /32 IP host route table size and ARP entry size to calculate the largest number of VMs supported in a layer 2 domain. Each VM consumes a switch's (especially a modular/core switch's) logical resources: one MAC entry, one /32 IP host route and one ARP entry. Further, each physical server may use two MAC/IP/ARP entries: one for management, one for IP storage and/or other uses. Therefore, the lowest common denominator of the three (MAC/IP/ARP) table sizes determines the total number of VMs and physical machines that can reside within a layer 2 domain.
If a data center switch has a 128K MAC table, 32K IP host routes and an 8K ARP table, then it could support 8,000 VMs (Virtual Machines) plus physical (Phy) servers. From here we utilize a VM:Phy consolidation ratio to determine the approximate maximum number of physical servers. 30:1 is a typical consolidation ratio today, but denser 12-core processors entering the market may increase this ratio to 60:1. With 8K total IP endpoints at a 30:1 VM:Phy ratio, this calculates to approximately 250 Phy servers and 7,750 VMs.
250 physical servers can be networked into approximately 12 48-port ToR switches, assuming each server connects to two ToR switches for redundancy. If each ToR has 8x10GbE links to each core switch, that's 96 10GbE ports consumed on the core switch. So we can begin to see how the table size scalability of the switch determines how many of its 10GbE ports, for example, can be used, and therefore how large a cluster can be built. This calculation assumes the core is the Layer 2/3 boundary, providing a layer 2 infrastructure for the 7,750 VMs.
More generally, for a Layer 2 switch positioned at the ToR, for example, the specification that matters for virtualization scale is the MAC table size. So if an IT leader purchases a layer 2 ToR switch, say switch A, and its MAC table size is 32K, then switch A can be positioned into a layer 2 domain supporting up to ~32K VMs. For a Layer 3 core switch (aggregating ToRs), the specifications that matter are MAC table size, ARP table size and IP host route table size. The smallest of the three tables is the important and limiting virtualization scale number. If an IT leader purchases a layer 3 core switch, say switch B, and its IP host route table size is 8K and MAC table size is 128K, then this switch can support a VM scale of ~8K. If an IT architect deploys the Layer 2 ToR switch A supporting 32K MAC addresses and connects it to core switch B, then the entire layer 2 virtualization domain scales to the lowest common denominator in the network, the 8K from switch B.
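The lowest-common-denominator rule above can be sketched as a short calculation. The table sizes match the worked example in the text; the rounding of the VM/physical split is our own approximation (the report's 250/7,750 figures are similarly rough):

```python
def vm_scale(mac_table, ip_host_routes, arp_table, vm_per_phy=30):
    """Estimate layer 2 domain scale from switch table sizes.
    The smallest table caps the total endpoints (VMs + physical
    servers); a consolidation ratio then splits that total."""
    endpoints = min(mac_table, ip_host_routes, arp_table)
    phy = round(endpoints / (vm_per_phy + 1))  # rough split at vm_per_phy:1
    return endpoints, phy, endpoints - phy

# 128K MAC, 32K IP host routes, 8K ARP: the 8K ARP table is the limit
endpoints, phy, vms = vm_scale(128_000, 32_000, 8_000)
```

Swapping in switch B's 8K IP host route limit for a whole ToR-plus-core domain, as described above, is the same `min()` applied across every switch in the layer 2 path.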
Terms of Use
This document is provided to help you understand whether a given product, technology or service merits additional investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled, laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own networks.
Reasonable efforts were made to ensure the accuracy of the data contained herein, but errors and/or oversights can occur. The test/audit documented herein may also rely on various test tools, the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the vendors that are beyond our control to verify. Among these is that the software/hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers. Accordingly, this document is provided "as is," and Lippis Enterprises, Inc. (Lippis), gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained herein.
By reviewing this document, you agree that your use of any information contained herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting directly or indirectly from any information or material available on it. Lippis Enterprises, Inc. is not responsible for, and you agree to hold Lippis Enterprises, Inc. and its related affiliates harmless from, any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the information provided herein.
Lippis Enterprises, Inc. makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related to any information, products or companies described herein. When foreign translations exist, the English document is considered authoritative. To assure accuracy, only use documents downloaded directly from www.lippisreport.com.
No part of any document may be reproduced, in whole or in part, without the specific written permission of Lippis Enterprises, Inc. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive, or in a manner that disparages us or our information, projects or developments.
About Nick Lippis
Nicholas J. Lippis III is a world-renowned authority on advanced IP networks, communications and their benefits to business objectives. He is the publisher of the Lippis Report, a resource for network and IT business decision makers to which over 35,000 executive IT business leaders subscribe. The Lippis Report podcasts have been downloaded over 160,000 times; iTunes reports that its listeners also download the Wall Street Journal's Money Matters, Business Week's Climbing the Ladder, The Economist and the Harvard Business Review's IdeaCast. Mr.
Lippis is currently working with clients to design their private and public virtualized data center cloud computing network architectures to reap maximum business value and outcome.
He has advised numerous Global 2000 firms on network architecture, design, implementation, vendor selection and budgeting, with clients including Barclays Bank, Eastman Kodak Company, Federal Deposit Insurance Corporation (FDIC), Hughes Aerospace, Liberty Mutual, Schering-Plough, Camp Dresser McKee, the state of Alaska, Microsoft, Kaiser Permanente, Sprint, Worldcom, Cigitel, Cisco Systems, Hewlett-Packard, IBM, Avaya and many others. He works exclusively with CIOs and their direct reports. Mr. Lippis possesses a unique perspective on market forces and trends occurring within the computer networking industry, derived from his experience with both supply- and demand-side clients.
Mr. Lippis received the prestigious Boston University College of Engineering Alumni Award for advancing the profession. He has been named one of the top 40 most powerful and influential people in the networking industry by Network World. TechTarget, an industry online publication, has named him a network design guru, while Network Computing Magazine has called him a star IT guru.
Mr. Lippis founded Strategic Networks Consulting, Inc., a well-respected and influential computer networking industry consulting concern, which was purchased by Softbank/Ziff-Davis in 1996. He is a frequent keynote speaker at industry events and is widely quoted in the business and industry press. He serves on the Boston University College of Engineering Dean's Board of Advisors as well as many start-up venture firms' advisory boards. He delivered the commencement speech to Boston University College of Engineering graduates in 2007. Mr. Lippis received his Bachelor of Science in Electrical Engineering and his Master of Science in Systems Engineering from Boston University. His Master's thesis work included selected technical courses and advisors from the Massachusetts Institute of Technology on optical communications and computing.