FCoE iSCSI


There are a number of discussions, blogs, and articles comparing Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and Fibre Channel (FC). Many of them share a common belief that FCoE and FC are better suited as core data center storage area networks (SANs) and that iSCSI is ideal for Tier 2 storage or for SAN deployments in remote or branch office (ROBO) and small and medium business (SMB) environments. That is because iSCSI is characterized as low-performing, lossy, and unpredictable.

In this blog I will tackle the misinformation around iSCSI performance as compared to FC and FCoE. I will also compare the effective efficiency of the various SAN protocols, since efficiency is an aspect of performance.

Both iSCSI and FCoE share the same 10 Gigabit Ethernet (10GbE) transport layer. However, the perception is that TCP/IP overhead makes iSCSI inefficient compared to FCoE and FC (both having a better payload-to-packet-size ratio), thus leading to lower performance and efficiency. Figure 1 shows the protocol efficiency calculation for iSCSI (both 1.5K MTU and 9K MTU), FC, and FCoE (2.5K MTU). It can be seen that when jumbo frames are enabled, iSCSI has the best protocol efficiency.

    Figure 1: Protocol efficiency comparisons
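As a rough back-of-the-envelope check on Figure 1, the efficiency calculation can be sketched in a few lines of Python. The header sizes below are standard wire-format values (Ethernet framing, IPv4/TCP without options, the FC frame header, and the FCoE encapsulation); the 48-byte iSCSI PDU header is ignored here since it is amortized across many frames on large transfers. Treat this as a sketch under those assumptions; the exact percentages depend on which overheads Dell counted.

    # Back-of-the-envelope protocol efficiency: payload bytes / total wire bytes.
    # Assumes standard overheads: Ethernet preamble 8 + header 14 + FCS 4 +
    # inter-frame gap 12; IPv4 20 and TCP 20 (no options); FC header 24 + CRC 4;
    # and roughly 18 bytes of FCoE encapsulation (FCoE header, SOF, EOF).

    ETH_OVERHEAD = 8 + 14 + 4 + 12        # preamble, header, FCS, inter-frame gap

    def iscsi_efficiency(mtu):
        payload = mtu - 20 - 20           # MTU minus IPv4 and TCP headers
        return payload / (mtu + ETH_OVERHEAD)

    def fcoe_efficiency(fc_payload=2112): # 2112 is the maximum FC frame payload
        wire = 24 + 4 + 18 + fc_payload + ETH_OVERHEAD
        return fc_payload / wire

    print(f"iSCSI, 1.5K MTU: {iscsi_efficiency(1500):.1%}")  # ~94.9%
    print(f"iSCSI, 9K MTU:   {iscsi_efficiency(9000):.1%}")  # ~99.1%
    print(f"FCoE:            {fcoe_efficiency():.1%}")       # ~96.2%

The ordering matches the figure's conclusion: with jumbo frames enabled, iSCSI's payload-to-overhead ratio edges out FCoE's.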

Regarding performance, the claim that iSCSI is low-performing might have been true when 1 Gbps was the maximum throughput available per iSCSI port (whereas FC was delivering 2 Gbps, 4 Gbps, and 8 Gbps per port), but with the availability of 10GbE, the commonly held belief that iSCSI performance is not up to par with FCoE or FC is no longer true.

The Office of the CTO at Dell conducted a series of performance tests to compare 10GbE iSCSI, FCoE, and 4 Gb FC. To ensure similar workloads, the application throughput was limited to 4 Gb. The host adapters used for the different protocols were as follows: a 10GbE network interface card (NIC) with iSCSI offload for iSCSI traffic; a 10GbE converged network adapter (CNA) for FCoE traffic; and a 4 Gbps FC host bus adapter (HBA) for Fibre Channel traffic.

The goal of the testing was to capture achieved throughput and CPU utilization for a given SAN protocol.


The protocol efficiency comparisons from Figure 1 might be theoretical in nature; Figure 2 shows results from an I/O workload study comparing the throughput of 10GbE iSCSI, FCoE, and 4 Gb FC HBAs. To keep the results easy to visualize, they show the throughput achieved when the application generated 4 Gb of throughput. It can be seen clearly that iSCSI outperforms FCoE and FC, regardless of read or write operations, for various I/O block sizes.

    Figure 2: Throughput performance comparisons (MB/s)

Along with capturing the throughput, let's examine the host CPU utilization to better assess the performance and efficiency of each SAN protocol. All the host adapters include hardware-based offload capability to process the protocol-specific traffic, minimizing use of CPU resources. Figure 3 shows the effective CPU utilization for various workloads. It can be seen from this figure that all the host adapters have similar CPU utilization metrics, again reinforcing the fact that iSCSI is as efficient as FCoE and FC.

Finally, Figure 4 shows throughput efficiency, defined as MBps/%CPU, for the various storage protocols. The chart shows 10GbE iSCSI having the best throughput efficiency across the workload types, clearly outperforming FCoE and FC.
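To make the metric concrete: if an adapter moved, say, 450 MB/s while consuming 9% of the host CPU, its throughput efficiency would be 450 / 9 = 50 MBps per %CPU. A trivial sketch (the numbers here are illustrative placeholders, not Dell's measurements):

    # Throughput efficiency normalizes throughput by the CPU it costs to achieve.
    def throughput_efficiency(throughput_mbps, cpu_percent):
        return throughput_mbps / cpu_percent

    # Two hypothetical adapters moving similar data at different CPU cost:
    print(throughput_efficiency(450, 9.0))   # 50.0 MBps per %CPU
    print(throughput_efficiency(430, 12.0))  # ~35.8 MBps per %CPU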

From the test results we can confidently summarize that iSCSI as a SAN protocol is not lower-performing or less efficient than FC or FCoE. On the contrary, iSCSI outperforms both FC and FCoE. Customers who are planning to purchase storage for their data centers can consider an iSCSI SAN a viable option, knowing iSCSI performance is on par with or even better than FCoE and FC. Also, customers considering unifying their data center networks over Ethernet can start doing so now with iSCSI. While FCoE can also deliver storage traffic over Ethernet, it is still under development and is not ready for prime time.


    Figure 3: CPU utilization (%) for iSCSI offload, FCoE, and FC

    Figure 4: Overall protocol throughput efficiency (MBps/%CPU) for iSCSI offload, FCoE, and FC


    Defining The Terms

I want to try to avoid the "yeah, but" or "fanboi" comments from the outset. First, I understand FCoE much, much better than I understand iSCSI, so there may be some specifics or details that I am missing, and I highly encourage corrections or additions. My motive here is to examine the technologies in as detached and unbiased a manner as possible to get to the true performance numbers.

Also, I'm looking here at the question of performance. By itself, performance is a Pandora's box of "it depends," and I understand and accept that burden from the get-go. Performance, like price, must be handled as a purchase criterion in context, so I'm not suggesting that any recommendations be made solely upon any one element over another.

Having said that, what exactly are the performance concerns we should have with iSCSI vs. FCoE?

    The Nitty Gritty

At first glance, it appears that FCoE provides a more efficient encapsulation method using standard transmission units. There is no need to travel as far up and down the OSI layer stack, for example, which means that there is less processing required on either end of a point-to-point network for dealing with additional headers.

If you're new to this, think of it this way: You have a letter you want to send to Santa Claus. You write your letter, place it in an envelope, and then drop it in the mail. That letter then arrives at the North Pole (if you addressed it properly) and Santa's helpers open the letter and hand it to him. That's the FCoE metaphor. (Actually, here's a much better and visually appealing description.)

    How many layers?

The TCP/IP metaphor (with respect to layers) means that you have to take that letter to Santa Claus and then place it into a larger envelope, and then put that larger envelope into a box before sending it on its way. The extra layers of packing and unpacking take time and processing power.

iSCSI requires more packing and unpacking in order to get to the letter, the argument goes, so over time that would mean that Santa would, in theory, be able to open fewer letters in the same amount of time.
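To put rough byte counts on the metaphor, here is a small sketch comparing the fixed encapsulation overhead each stack wraps around a SCSI payload. The sizes are standard wire-format values; the comparison is only about stack depth, since in practice iSCSI's 48-byte PDU header is amortized over many Ethernet frames on large transfers.

    # Per-frame encapsulation layers around a SCSI payload, in bytes.
    iscsi_stack = {
        "Ethernet header + FCS": 14 + 4,
        "IPv4 header": 20,
        "TCP header": 20,
        "iSCSI basic header segment": 48,   # per PDU, amortized across frames
    }
    fcoe_stack = {
        "Ethernet header + FCS": 14 + 4,
        "FCoE encapsulation (incl. SOF/EOF)": 18,
        "FC header + CRC": 24 + 4,
    }

    for name, stack in (("iSCSI", iscsi_stack), ("FCoE", fcoe_stack)):
        print(f"{name}: {len(stack)} layers, {sum(stack.values())} header bytes")

Fewer envelopes per letter is the whole FCoE argument; whether the difference matters in practice is exactly what the evidence below calls into question.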

There is evidence to suggest that this conventional wisdom may be misleading, however. There are a lot of factors that can affect performance, to the degree that a properly tuned iSCSI system can outperform an improperly configured FC system.

In fact, "an iSCSI storage system can actually outperform a FC-based product depending on more important factors than bandwidth, including the number of processors, host ports, cache memory and disk drives and how wide they can be striped" (Invurted.com).

Ujjwal Rajbhandari from Dell wrote a blog piece comparing the performance of iSCSI, FCoE, and FC in which he found that iSCSI's efficiency can be profound, especially when enabling jumbo frames.

Dell's measurements are somewhat difficult to place in context, however. While the article was written in late October 2009, only 4Gb throughput was used, even though FCoE cards running at line speed had been available for more than half a year. (The graphs are also difficult to interpret: one of them doesn't really make much sense at all, in fact, as it appears that CPU utilization is a continuum from reading to writing rather than a categorization of activities.)

It seems to me that the whole point of understanding protocol efficiencies becomes salient as the speeds increase. The immediate question I have is this: if Dell points out that iSCSI efficiencies at 1GbE are inappropriate when compared to faster FC speeds, why would Dell compare slower FC speeds and efficiencies to 10Gb iSCSI?

For instance, when moving from 4Gb to 8Gb HBAs, even within a pure 4Gb switching environment using 4Gb storage, the overall throughput and bandwidth efficiency can increase significantly due to the improved credit handling.

Nevertheless, there is plenty of evidence to suggest that iSCSI performance is impressive. In February, Frank Berry wrote an article about how Intel and Microsoft are tweaking iSCSI for enterprise applications, improving CPU efficiency as well as blasting through some very impressive IOPS numbers. Stephen Foskett has a very interesting article on how it was done, and rightfully asks the more important question: can your storage handle the truth?

Now, it's very easy to get sidetracked into other aspects of an FCoE/iSCSI decision tree. "Yeah, but…" becomes very compelling to say, but for our purposes here we're going to stick with the performance question.

    How much performance is enough?


Ultimately the question involves the criteria for data center deployment. How much bandwidth and throughput does your data center need? Are you currently getting 4 Gbps of storage bandwidth in your existing infrastructure?

There is more to SAN metrics than IOPS, of course; you need to take it hand-in-hand with latency (which is where the efficiency question comes into play). Additionally, there is the question of how well iSCSI target drivers have been written and tuned.
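One way to see why IOPS and latency must be read together is Little's law: with a fixed number of outstanding I/Os, achievable IOPS is capped by per-I/O latency. A minimal sketch with hypothetical numbers:

    # Little's law for storage queues: outstanding I/Os = IOPS x latency.
    def max_iops(queue_depth, latency_ms):
        return queue_depth / (latency_ms / 1000.0)

    print(max_iops(32, 0.5))   # 64,000 IOPS possible at 0.5 ms per I/O
    print(max_iops(32, 2.0))   # 16,000 IOPS at 2.0 ms: 4x latency, 1/4 the IOPS

So a protocol stack that shaves latency (the efficiency question) buys IOPS headroom even when raw bandwidth is identical.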

So, obviously, iSCSI can be tuned to deliver jaw-dropping performance given the right circumstances. The question that comes to mind, then, is:

    How does performance scale?

iSCSI best practices require a completely separate iSCSI VLAN or network, which helps with dedicating bandwidth for SAN traffic. Nevertheless, what's not clear is what happens to performance at larger scales:

- What happens with boot-from-SAN (e.g., PXE) environments?
- What is the theoretical maximum node count?
- What is the practical maximum node count?
- What is the effect of in-flight security (e.g., encryption) upon performance? What is the threshold for performance degradation?
- How does scaling affect the performance of the IQN server/management?
- Where is the retransmission threshold for congestion, and what is the impact on the performance curve?

This is where my limited experience with iSCSI is likely to get me into trouble. I'm having a hard time finding the answers to those questions as they relate to 10Gb iSCSI, so I'm open to input and clarification.

    Bottom line.

Even with these additional questions regarding issues that affect performance, it's clear that iSCSI does have the performance capability for data center storage traffic. There are other considerations, of course, and I'll be addressing them over time. Nevertheless, I think it's quite clear that, all things being equal (and yes, I know, they never are), iSCSI can easily put up the numbers to rival FCoE.

    Why FCoE? Why not just NAS and iSCSI?

Scott Lowe recently wrote a good post on FCoE; you can read his thoughts here. The comments from his readers are comments I've heard from others as well, so I posted a response in the comments. But I think Scott and I don't have the same readership (and perhaps those who do may not read the comments).


This is an important dialog, IMHO, and I thought my response was worth posting, as I've gotten loads of questions like this also.

If you're interested in this thread, I suggest reading Scott's posts and the comments. If you want to see my take, read on.

(from my comment on Scott's blog post)

Guys, the multi-hop thing is a bit old news - I did a big post on this when the FCoE spec was done (June 3rd):

    http://virtualgeek.typepad.com/virtual_geek/2009/06/fcoe-ratified.html

    This covers it in gory detail.

The specific issue is that pre-standard initiators and targets were missing something called FIP (FCoE Initialization Protocol). The gen 1 HBAs from QLogic and Emulex were really more for early interop, plugfests, and development, and I believe they are not software-upgradable to the FC-BB-5 standard that includes FIP (I know this for a fact for the QLogic 8000 series, and I would fully expect the same from Emulex).

BTW, we caught flack at EMC for not natively supporting FCoE earlier on the array targets, but this was why: the standard simply wasn't ready. It was ready for host → FCoE switch → FC switch → FC target paths. Now, it's getting ready for array targets. Personally, that's why I disagreed with the approach of taking the QLE8000 series card (with custom pre-FIP standard elements), putting it into a FAS head, and calling that a solution. While that was going on (and making marketing noise - but frankly a move that doesn't help the customer, because now they have a FAS head that needs a heavy hardware maintenance window to do a PCIe card upgrade), we were busy doing interop and working on the standard at the standards body (look at the meeting minutes - they are all public).

We're now, of course, developing an UltraFlex I/O module for FCoE, which is hot-swappable.

But back to the larger question - why FCoE? People who know me know I'm a SUPER fan of NAS and iSCSI, and naturally am biased in that direction, but as I've worked with more and more customers, I have a growing understanding of the why.

NFS and iSCSI are great, but there's no getting away from the fact that they depend on TCP retransmission mechanics (and in the case of NFS, potentially even higher in the protocol stack, if you use it over UDP - though this is not supported in VMware environments today). Because of the intrinsic model of the protocol stack, the higher you go, the longer the latencies in various operations. One example (and it's only one): this means always seconds, and normally many tens of seconds, to handle state/loss of connection (assuming that the target fails over instantly, which is not the case for most NAS devices). Doing it in shorter timeframes would be BAD, as in this case the target is an IP, and for an IP address to be non-reachable for seconds is NORMAL.
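The seconds-scale behavior he describes falls out of TCP's retransmission timer. Here is a sketch of the standard exponential backoff, assuming a Linux-like 200 ms minimum RTO that doubles on each unanswered retransmission (real stacks cap and tune these values, so this is illustrative only):

    # Cumulative wait across TCP retransmissions with exponential backoff.
    def time_spent_retrying(initial_rto=0.2, retries=8):
        total, rto = 0.0, initial_rto
        for _ in range(retries):
            total += rto   # wait one RTO, retransmit, double the timer
            rto *= 2
        return total

    print(f"{time_spent_retrying():.1f} s")  # ~51 s after 8 silent retransmissions

Contrast that with link-level pause in a lossless Ethernet fabric, which reacts in microseconds; that is the gap being pointed at here.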


There's also the fact that anything dependent on TCP/IP will have scenarios that depend on ARPs, which can take time.

This isn't a secret. Look at the NetApp TR-3428 (and upcoming TR-3749) and EMC H6337 docs, which spell out the timeouts for NFS datastores on FAS and Celerra platforms respectively; these are in many tens of seconds (refer to the latest; currently it adds up to 125 seconds). For iSCSI, if you read the VMware guides, the recommendation is 60 seconds.

FCoE expects most transmission-loss handling to be done at the Ethernet layer, via 802.1Qbb (STILL NOT A STANDARD!) for lossless congestion handling and legacy CRC mechanisms for line errors. This means milliseconds (and in fact, in many cases, microseconds) of link-state sensitivity.

Also, whereas we are seeing 30x performance increases for solid-state disks on devices without filesystems, we see 4-6x in cases where they support a filesystem. That doesn't mean filesystems (or NAS devices) are bad, but it highlights that one answer isn't the answer all the time, for all workloads, all SLAs, and all use cases.

These ARE NOT showstoppers for many, many (most?) applications and many, many use cases, but they are for some - and often those are applications with hyper-stringent SLAs. But we want to virtualize everything, every application possible, right?

All FCoE adapters and switches can also be used for iSCSI and NAS, so don't think of it as an either/or, but as an "and." It means that it is possible to whittle the use cases that can't use an Ethernet storage transport down to near zero (it's not zero, because there will always be mainframes and whatnot). The ultimate point on this (this being the point that it's not an FC HBA, but rather a NIC feature) is that Intel has committed to supporting the final result of 802.1Qbb and then doing a software initiator; at that point, FCoE support will just be an attribute of every commodity NIC and switch on the market. Everyone in the FC HBA/switch market is rushing to it not because they want proprietary, but because we're reaching the inflection point where if you're not doing this, you're going to be out of business (maybe not today, but at a relatively near tomorrow).

The FCoE idea is important (again, as a NIC/switch feature) because it means that convergence (wire once, use for LAN/NAS/iSCSI/FCoE) is then applicable to a broader market, which only accelerates the broader use of Ethernet storage, which many people (me included) want to see come sooner rather than later.

There's also the far lesser IT value proposition of maintaining and integrating with existing tools and processes. I only say "lesser" because frankly, if there's a better way, it can over time change a process.

    Remember - this is coming from someone who:

a) loves NAS
b) loves iSCSI (came from an iSCSI startup)


c) works for a storage company that is in the NAS, iSCSI, FC, and FCoE (and heck, COS and CAS as well) business - we just do what our customers tell us they need.

At least in my personal experience, our customers are asking for FCoE for those reasons.

    Continuing the FCoE Discussion

Tuesday, December 9, 2008 in Storage by slowe | 17 comments

A few weeks ago I examined FCoE in the context of its description as an I/O virtualization technology in my discussion of FCoE versus MR-IOV. (Despite protestations otherwise, I'll continue to maintain that FCoE is not an I/O virtualization technology.)

Since that time, I've read a few more posts about FCoE in various spots on the Internet:

- Is FCoE a viable option for SMB/Commercial?
- Is the FCoE Starting Pistol Aimed at iSCSI?
- Reality Check: The FCoE Forecast

Tonight, after reading a blog post by Dave Graham regarding FCoE vs. InfiniBand, I started thinking about FCoE again, and I came up with a question I want to ask. I'm not a storage expert, and I don't have decades of experience in the storage arena like many others that write about storage. The question I'm about to ask, then, may just be the uneducated ranting of a fool. If so, you're welcome to enlighten me in the comments.

Here's the question: how is FCoE any better than iSCSI?

Now, before your head explodes with disbelief at the horror that anyone could ask that question, let me frame it with more questions. Note that these are mostly rhetorical questions, but if the underlying concepts behind them are incorrect, you are, again, welcome to enlighten me in the comments. Here are the framing questions that support my primary question above:

1. FCoE is always mentioned hand-in-hand with 10 Gigabit Ethernet. Can't iSCSI take advantage of 10 Gigabit Ethernet too?

2. FCoE is almost always mentioned in the same breath as "low latency" and "lossless operation." Truth be told, it's not FCoE that's providing that functionality, it's CEE (Converged Enhanced Ethernet). Does that mean that FCoE without CEE would suffer from the same problems as iSCSI?

3. If iSCSI were running on a CEE network, wouldn't it exhibit predictable latencies and lossless operation like FCoE?

These questions, and the thoughts behind them, are not necessarily mine alone. In October, Stephen Foskett wrote:

    http://blog.scottlowe.org/category/storage/http://blog.scottlowe.org/category/storage/http://blog.scottlowe.org/category/storage/http://blog.scottlowe.org/author/slowe/http://blog.scottlowe.org/author/slowe/http://blog.scottlowe.org/author/slowe/http://blog.scottlowe.org/2008/12/09/continuing-the-fcoe-discussion/#commentshttp://blog.scottlowe.org/2008/12/09/continuing-the-fcoe-discussion/#commentshttp://blog.scottlowe.org/2008/12/09/continuing-the-fcoe-discussion/#commentshttp://blog.scottlowe.org/2008/11/17/fcoe-versus-mr-iovhuh/http://blog.scottlowe.org/2008/11/17/fcoe-versus-mr-iovhuh/http://blog.scottlowe.org/2008/11/17/fcoe-versus-mr-iovhuh/http://blog.flickerdown.com/2008/10/14/is-fcoe-a-viable-option-for-smbcommercial/http://blog.flickerdown.com/2008/10/14/is-fcoe-a-viable-option-for-smbcommercial/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.fosketts.net/2008/10/19/fcoe-reality/http://blog.fosketts.net/2008/10/19/fcoe-reality/http://flickerdown.com/?p=349http://flickerdown.com/?p=349http://flickerdown.com/?p=349http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://flickerdown.com/?p=349http://blog.fosketts.net/2008/10/19/fcoe-reality/http://blog.fosketts.net/2008/10/16/fcoe-versus-iscsi/http://blog.flickerdown.com/2008/10/14/is-fcoe-a-viable-option-for-smbcommercial/http://blog.scottlowe.org/2008/11/17/fcoe-versus-mr-iovhuh/http://blog.scottlowe.org/2008/12/09/continuing-the-fcoe-discussion/#commentshttp://blog.scottlowe.org/author/slowe/http://blog.scottlowe.org/category/storage/
  • 8/3/2019 FCoE iSCSI

    10/27

And iSCSI isn't done evolving. Folks like Mellor, Chuck Hollis, and Storagebod are lauding FCoE at 10 gigabit speeds, but seem to forget that iSCSI can run at that speed, too. It can also run on the same CNAs and enterprise switches.

If those Converged Network Adapters (CNAs) and enterprise switches are creating the lossless CEE fabric, then iSCSI benefits as much as FCoE. Dante Malagrino agrees on the Data Center Networks blog:

I certainly agree that Data Center Ethernet (if properly implemented) is the real key differentiator and enabler of Unified Fabric, whether we like to build it with iSCSI or FCoE.

Seems to me that all the things FCoE has going for it (10 Gigabit speeds, lossless operation, low-latency operation) are equally applicable to iSCSI, as they are functions of CEE and not FCoE itself. So, with that in mind, I bring myself again to the main question: how is FCoE any better than iSCSI?

You might read this and say, "Oh, he's an FCoE hater and an iSCSI lover." No, not really; it just doesn't make any sense to me how FCoE is touted as so great while iSCSI is treated like the red-headed stepchild. I have nothing against FCoE; just don't say that it's an enabler of the Unified Fabric. (It's not. CEE is what enables the Unified Fabric.) Don't say that it's an I/O virtualization technology. (It's not. It's just a new transport option for Fibre Channel Protocol.) Don't say that it will solve world hunger or bring about world peace. (It won't, although I wish it would!)

Of course, despite all these facts, it's looking more and more like FCoE is VHS and iSCSI is Betamax. Sometimes the best technology doesn't win…

    17 comments


1. Justin on Tuesday, December 9, 2008 at 3:32 am

Scott, please forgive me if I'm explaining stuff you've heard before and am insulting your intelligence, but…

From my experience, the big deal with FCoE vs. iSCSI is that one works at Layer 2 and the other at Layer 3 (respectively). iSCSI is SCSI commands and data (1), encapsulated in TCP packets (2), sent using IP (3), over Ethernet frames (4). FCoE is

SCSI commands and data (1), encapsulated in FC frames (2), sent over Ethernet frames (3). It's just inherently less overhead.

FCoE is essentially SCSI over Ethernet, whereas iSCSI is SCSI over IP. Similarly, there exists FCIP, which is comparable to iSCSI, although it's used for a completely different purpose (tunneling FC from site to site).

So there's less work to do for FCoE, and this means less work either in the software driver for it or in the adapter. Also of note is that today a hardware iSCSI adapter will run you about the same as an FC HBA, so there's not that much cost savings. The other important point is that FCoE will work with existing FC storage arrays with no modification, which means you can start your unified fabric while still maintaining your old FC infrastructure.

I do think that 10Gb iSCSI will be better than 1Gb iSCSI, obviously, but FCoE will be better still. There will be support for existing enterprise storage arrays in much greater abundance with FCoE than with iSCSI, and your performance will be better straight off the bat (a 4Gb FC array via FCoE will be way faster than a 1Gb iSCSI array on 10GbE).

That being said, iSCSI will always be routable whereas FCoE won't be… but do you really want to be routing your storage traffic?

I think the market will eventually decide which technology will win, and just from the way I see it, I'm betting on FCoE due to its compatibility.

    Your thoughts?

2. Rodos on Tuesday, December 9, 2008 at 5:12 am

Great topic, Scott. There is a thread on VMTN about this at the moment, and it would be great if people who are thinking about this could contribute. It's at http://communities.vmware.com/message/1119046

    Rodos

3. Rodos on Tuesday, December 9, 2008 at 5:19 am

FCoE is better than iSCSI when you need to integrate into FC fabrics that already exist.


It's a fruit comparison, but not apples to apples. One use case: if you have an existing FC storage fabric but want to bring in new equipment, you can use FCoE at the access layer and then transport it over FC in the core to get to the existing SANs. Not many SANs natively support FCoE, but yes, they do support iSCSI. However, many don't allow FC and iSCSI either at the same time or to the same LUNs. But my point is more about transition and integration with existing environments.

Just a thought as to one difference.

4. Gary on Tuesday, December 9, 2008 at 5:33 am

I see where the confusion comes from here. The main difference between iSCSI and FCoE is that iSCSI uses the TCP/IP stack and FCoE runs directly on Ethernet. Ethernet has no flow control, therefore the new converged switches need to add flow control to the traffic (a bit of a Wikipedia-lookup answer, but the interpretation is important).

Like you, I am not a storage guru, but as far as the other main activities associated with FC go, zoning and mapping, the new converged switches will need to perform these processes. iSCSI does not have these facilities, although they can be mimicked using access lists. FCoE is not routable, unlike iSCSI, so all devices from initiator to target need to be converged (FCoE-aware) devices to create a fabric.

The nice thing about iSCSI is that it can be implemented on an IP network. Its performance is going to be tied to the setup and any hardware used to provide the transport (doing iSCSI in hardware rather than in software provides benefits, as does tech like jumbo frames). iSCSI will also inherit any issues with the IP network it is running on top of.

FCoE is going to add an additional cost to the infrastructure at any site where it is deployed. Converged switches will not be cheap, and compatible HBAs will still be required. Whether the HBA will also be converged is another question (can I run FCoE and TCP/IP over the same port?).

The bit of knowledge I am missing is how converged these devices are. Once the initiator connects to the converged switch, is the transport between the switches also converged (carrying both TCP/IP networking and FCoE traffic)? And if this is the case, how do the storage admins feel about their storage path being polluted with other traffic?

I would almost consider that FCoE only really provides a few benefits:
1. Brings FC to 10 gig
2. Reduces the number of deployed devices (network + fabric vs. converged)
3. Changes the medium to a lower-cost alternative and enables storage pathing over infrastructure that might be more readily available (gets rid of fibre runs and replaces them with copper)

I'm probably wrong about a lot of this stuff, so some clarification would help both myself and other readers of Scott's (excellent) blog.

    Gary

5. Gary on Tuesday, December 9, 2008 at 6:24 am

The difference between iSCSI and FCoE is mostly down to the transport. FCoE uses Ethernet as the transport and iSCSI uses TCP/IP. Ethernet needs to be extended to support flow control (already part of the TCP/IP standard).

FCoE requires a converged device to perform the following:
1. Map MAC addresses to WWNs
2. Perform zoning/mapping/masking functions
3. Create a fabric between initiator and target
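On item 1: in the FC-BB-5 scheme, a node logs into the fabric with its WWN, the fabric assigns it a 24-bit FC_ID, and the node's FCoE MAC (a fabric-provided MAC address, or FPMA) is built by prefixing that FC_ID with a 24-bit FC-MAP value. A minimal sketch (the FC_ID used is hypothetical):

    # FPMA construction per FC-BB-5: MAC = 24-bit FC-MAP | 24-bit FC_ID.
    DEFAULT_FC_MAP = 0x0EFC00            # default FC-MAP defined by the standard

    def fpma(fc_id, fc_map=DEFAULT_FC_MAP):
        mac = (fc_map << 24) | fc_id     # concatenate into a 48-bit address
        return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

    print(fpma(0x010203))                # -> 0e:fc:00:01:02:03 (FC_ID hypothetical)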

The nice thing about iSCSI is that the network doesn't need to change; standard switching and routing can provide a route from initiator to target (routing being the key word, as Ethernet is not routable). iSCSI does not provide the zoning/mapping/masking functions, but some of this can be achieved via access lists and clever VLAN tagging.

FCoE also supports VLAN tagging, so logical separation of traffic is still possible (rather than the guaranteed physical separation a dedicated fabric provides).

Adapters will also be converged, so both TCP/IP and FCoE can use the same medium. This is where I think the standard helps some implementations but hinders others. Here's my thinking.

If you converge the two transports, then you need to have some kind of QoS in place to ensure the storage path is not interrupted. Storage admins like to know where the bottlenecks can exist in the transport (fabric) to identify throughput issues. The converged devices will need a management system for both QoS and throughput analysis to satisfy the needs of the storage admins and the networking teams. It's great that FCoE reduces the number of devices, the amount of cabling, and the power/cooling requirements, but at the same time it is bad that data paths are shared to the degree available, as it can lead to bleed between the networking guys and the storage guys.

I would expect that a lot of early implementations will still keep the storage and networking paths logically separated, i.e., a network card for TCP/IP traffic and an HBA for FCoE, with separate trunks/paths all the way through the infrastructure (probably

using VLAN tagging). It's the only way to guarantee to both the networking and storage teams that their traffic has equal priority.

I work with a relatively small setup (VMware, blades, and NetApp). I'm not a storage guru. I currently utilise both iSCSI and FC in my environment. FCoE would not change much for me, but I can see the datacenter taking big advantage of it. The standard isn't going to be the issue; it's the management of converged traffic that will be the big one. It's similar to when voice/video came onto TCP/IP: suddenly there was traffic that needed priority. Voice/video is easier to manage, as we know the bandwidth requirements in advance. Storage is generally not so uniform. 10Gb will quickly become 5Gb data / 5Gb storage, or a similar weighting. At least this way we can guarantee the throughput to each of the different teams.

    Gary

6. slowe on Tuesday, December 9, 2008 at 7:34 am

    Justin,

I'll grant you that FCoE has less overhead; there's no doubt about that. But I really have to question the compatibility of FCoE with existing FCP: how does having to use new adapters (CNAs) and new switches (Cisco Nexus or equivalent) equal compatibility? Sure, it's still FC, but you'll need a bridge to join FCoE and FCP fabrics. I think a lot of people overlook the fact that new CNAs and new fabric switches are required.

As for performance, of course a 4Gb FC array via FCoE over 10Gb Ethernet will be better than a 1Gb iSCSI array over 10Gb Ethernet. The bottleneck here is the array, not the fabric or the transport.

You're betting on FCoE for compatibility, but to me iSCSI seems much more compatible.

    Rodos,

As I mentioned to Justin, it seems to me that we'll need an FCoE-to-FCP bridge to join the physical fabrics into a single logical fabric. This bridge will introduce less latency and overhead than an iSCSI-to-FCP bridge, but it will be an additional piece of required equipment. FCoE does seem much more of a transitional technology than anything else; hence my comment about VHS vs. Betamax. VHS was not the best technology, but it won. Will we see the same with FCoE and iSCSI?

7. Roger Lund on Tuesday, December 9, 2008 at 11:06 am

    slowe,

I tend to agree with you. Additionally, it is very easy to scale iSCSI, whereas you are stuck with one controller pair on an EMC / NetApp FC array; each iSCSI array (EQL) has its own controllers.

Therefore, if you have three racks of EMC or three racks of EQL (12 per rack), the FC arrays each have two controllers (or at least most that I have seen do), whereas the EQL iSCSI setup would have something like 36 controllers vs. three EMC controllers for the same amount of storage.

Now, even if you had 8Gb FC, wouldn't you be limited to 8Gb x 4 ports x 3 controllers = 96Gb?

To make it even, let's say you had iSCSI SANs with 10Gb controllers: 10Gb x 2 ports x 36 controllers = 720Gb.

Hence, if you had six top-end switches @ 10Gb, connected to 36 10Gb SANs all on the same switch backplane, wouldn't the EQL have better throughput than FC over Ethernet?

8. Stu on Tuesday, December 9, 2008 at 11:07 am

Scott,
Sure there is new equipment (CNA, CNS), but from a management standpoint, the new servers get added to the existing FC fabric and can be zoned and given access just as if they were more FC nodes; this is the easy interoperability. There are plenty of customers running hundreds or thousands of nodes of FC. For customers that don't have a large investment in FC, iSCSI is a good solution (and sure, it will be able to take advantage of 10GbE and CEE). iSCSI has done very well in the commercial/SMB space, but the ecosystem and tools for large deployments (hundreds of nodes) haven't developed yet. Two paths to get customers to the 10GbE converged network.
-Stu

9. slowe on Tuesday, December 9, 2008 at 12:24 pm

    Roger,


Thanks for your comment. Not all iSCSI arrays scale in exactly the same fashion, so some of what you are discussing may be specific to Dell/EQL. In addition, not all iSCSI initiators will scale overall throughput linearly with more links (think of VMware ESX's software initiator, for example). In this regard, I will say that FC (and presumably FCoE) has the advantage.

    Stu,

Easy interoperability between FC and FCoE I will grant. As you describe, the ability to manage the FC and FCoE fabrics in much the same fashion, perhaps from the same tools, is quite compelling for large FC shops. But interoperability is not the same as compatibility, and to say that FC is compatible with FCoE is, in my opinion, incorrect. Perhaps I'm being a stickler, but if I can't plug an FC HBA into an FCoE fabric, then they're not compatible. Interoperable, yes, but not compatible.

    Thanks to both of you for your comments!

10. Nate on Tuesday, December 9, 2008 at 2:25 pm

FCoE may be appealing to current FC users because they want interoperability, but I don't see it having significant value over iSCSI for anyone coming into the storage market. FCoE may use Ethernet, but that doesn't make it the easy plug into the network that iSCSI is. Particularly for SMBs and SMEs that may not have dedicated storage teams, the ability to avoid learning an all-new network is huge. A standard switch running standard configs is perfect for iSCSI, not so for FCoE. Routability is a big deal: when you want to replicate data between data centers, or even across a WAN link, it's nice not to have to take an extra step of tunneling or conversion. iSCSI maintains a leg up on cost as well, because you don't need special switches. FCoE may get you to the same number of switches as iSCSI, but not necessarily the same commodity level of switches. HBAs are also not needed in many scenarios: if your servers aren't heavily loaded (and most aren't), they can easily handle the little bit of extra work to run a software initiator. The Microsoft iSCSI initiator is fantastic, with great MPIO. I'm biased because I was an early iSCSI adopter (started buying EqualLogic back when they were just EqualLogic), but I don't see any value in FCoE other than giving FC clingers a platform from which to claim they are keeping up with the times. 10Gb iSCSI would have meant the end of FC, so they had to jump on the Ethernet bandwagon.

11. Roger Lund on Tuesday, December 9, 2008 at 3:19 pm


    slowe,

Correct, and I think that really the largest bottleneck becomes the SAN and/or the server(s).

But I think that iSCSI is very flexible today, and will be more so in the future.

12. Jose L. Medina on Tuesday, December 9, 2008 at 5:31 pm

    Scott:

I agree with you: I can't see any reason to substitute iSCSI with FCoE. I think FCoE is another strategy to assure new business for storage and networking vendors. Personally, I have used iSCSI for years in ESX environments without any special knowledge of pure storage networking, and it works well for me! FCoE hides the manifest incapacity (or lack of desire) of networking manufacturers to improve Ethernet with the QoS capabilities a storage network requires. I'm sure iSCSI over serious datacenter Ethernet can provide the same solutions as FCoE without the expensive knowledge and management of an FCxx network.

13. Trackback from Dave Graham's Weblog on Tuesday, December 9, 2008 at 6:04 pm

14. Dan McConnell on Tuesday, December 9, 2008 at 11:38 pm

Scott,
Great question! It's always fun cutting through the spin of the day to get to reality. I appreciate the post and your thoughts/insights, as they do cut through the spin cycle. Apologies up front for the length of the post, but I'm catching up on much of the great discussion in the thread. So, on to the questions:

1. FCoE is always mentioned hand-in-hand with 10 Gigabit Ethernet. Can't iSCSI take advantage of 10 Gigabit Ethernet too?

A->> In short, yes. iSCSI will function on both non-DCB-enabled and DCB-enabled 10Gb Ethernet. For those that don't need DCB, or don't want to invest in or replace their infrastructure with DCB-enabled switching, iSCSI will run just fine on standard 10Gb Ethernet (or 1Gbps Ethernet for that matter, unlike FCoE, which requires 10Gbps Ethernet). For those that desire the DCB functionality, iSCSI will sit on top of a DCB-enabled network and take full advantage of what DCB provides. (Side note: DCB, Data Center Bridging, = CEE.)


2. FCoE is almost always mentioned in the same breath as "low latency" and "lossless operation." Truth be told, it's not FCoE that's providing that functionality, it's CEE (Converged Enhanced Ethernet). Does that mean that FCoE without CEE would suffer from the same problems as iSCSI?

A->> DCB-enabled networking (NICs, switches, and storage arrays) is required for FCoE; FCoE will not work without it. The reason is that FCoE itself does not include a mechanism for ensuring reliable delivery. It therefore requires that functionality to exist in the network (i.e., flow control for Ethernet), which is what a DCB-enabled network infrastructure is targeted to provide. iSCSI, on the other hand, has its own method for ensuring reliable transfer in the protocol layer (i.e., TCP). This enables iSCSI to run reliably on standard non-DCB-enabled Ethernet switches (or remotely, for that matter).

3. If iSCSI were running on a CEE network, wouldn't it exhibit predictable latencies and lossless operation like FCoE?

A->> Yes.

Catching up on some of the interesting points/statements in the comments: Justin mentioned some additional work required for iSCSI. This additional work (i.e., TCP) is what ensures reliable delivery on non-DCB-enabled networks. FCoE pushes this work into the network, which is why it requires DCB-enabled NICs, switches, and storage devices. I would argue that for many typical workloads this additional processing is not noticeable. But in either case, if it is a pain point, iSCSI HBAs are available that offload this additional work. With an iSCSI HBA, the host-side processing is equivalent to FC or FCoE (all enter under a similar storage stack). I guess one way of looking at it is as follows: both FCoE and iSCSI can leverage optimized HBAs (DCB-enabled FCoE CNAs or iSCSI offload HBAs) and DCB-enabled switches to achieve similar performance, but iSCSI also has the flexibility to use standard NICs with standard non-DCB networks.

As far as Rodos' point about fitting into existing FC frameworks: one question that comes to mind is whether those frameworks are integrating manageability for the Ethernet switches/networks. I would guess that both FCoE and iSCSI are in the same boat here.

Justin also brought up an interesting point, that iSCSI is routable where FCoE won't be. This has some interesting implications today, with routing's ability to enable remote mirroring/DR, and I would suspect it may become an even more interesting differentiator with the growth of cloud computing. I'll wind down with a tie to Nate's point: FCoE might be appealing as a bridge back into existing Fibre Channel, but if the storage guys already have to swap out their network infrastructure toward Ethernet, iSCSI's flexibility to solve both ends of the cost/performance question, and the fact that it is already here, would seem to give it a leg up.

    -Dan


15. Aneel on Wednesday, December 10, 2008 at 12:58 am

I'm not a storage guy either, at all. 100% data networking background.

For q2: FCoE without CEE is a non-thing. Practically, just consider FCoE short for FCoCEE. Getting FC into the Ethernet frame, and getting lossless, non-blocking, PFC, etc., capabilities into Ethernet, were just steps in making FCo(C)E(E) a viable technology.

And q3: As things stand today with the standards in progress, iSCSI would ride on the lower-order priority queue in CEE and not get the same non-blocking behavior, etc., that FCoE will. A software hack or specialized CNAs could change that, but none are being publicly discussed, AFAIK.

16. Jeff Asher on Wednesday, December 10, 2008 at 5:13 am

Let me start by saying that I am, and have been for a long time, an IP/Ethernet proponent, and I regularly tell organizations not to invest in new FC infrastructure if they don't already have one. It just doesn't seem to make financial sense with the Data Center Ethernet initiatives in play at the moment. However…

While the technology debates are fun, other aspects must be considered for this technology to be deployed and accepted. At most large organizations, politics drives technology decisions at least as much as the merits of the technologies being considered. This is sad, but mostly true. The technologies being debated here actually intensify the political debate.

    Fibre Channel over Fibre Channel (FCoFC?) solves a political problem.FCoE creates a political problem.

FC switches and infrastructure are popular not only because of their many benefits, and in some cases technical superiority over legacy Ethernet, but because the storage group often got to own the switches and infrastructure rather than the network group. One group owning the infrastructure end-to-end meant that group could manage all aspects of storage without depending on another group. Service levels could theoretically improve, and political empire builders were happy because they owned more stuff and possibly more people. I've seen many 4 Gbps FC deployments where iSCSI was more than adequate technically and the financial benefits were simply not debatable, yet FC won out because the storage groups did not trust/like the network operations groups.


FCoE throws a kink in things because the network operations groups are more likely to own the switches rather than the storage groups. This breaks the end-to-end model and theoretically would drive down service levels because of the interfaces required between the two operations groups (I actually believe service levels would increase in well-run shops, but that is another debate).

The problem is that while 10GbE and DCE benefit both iSCSI and FCoE, they carry the same political problems that have slowed the adoption of iSCSI in large enterprises. If the storage group doesn't get to own the infrastructure end-to-end, they are going to stick with FC regardless of the benefits of doing something else. And no, role-based access controls for management don't cut it in terms of the political problem.

Is this view cynical? Probably; however, it was developed not just from my own experience but also from those of many people I've talked to at many customers, various manufacturers, and resellers.

    Again, I say all this despite living clearly in the Ethernet camp.

17. Nate on Monday, December 15, 2008 at 10:14 am

Jeff, that is a good point. I would also venture that the political reasoning hampering iSCSI and FCoE in the large enterprise is what makes the two technologies more appealing in the SMB and SME market. The smaller shops are less likely to have the luxury of dedicating teams of people to storage alone, so they need crossover knowledge. I personally think iSCSI offers more accessible crossover knowledge, due to the fact that it can run on any network.

The one way around the political issue for the larger folks is still to run a separate physical network. Cost effective? No. Most efficient? No. Like you said, in a well-run shop the two teams working together should actually be better, but we all know in some cases they'll still want to run their own gear. Basically, at that point iSCSI and FCoE just become enablers of 10GbE rather than convergence. That's sort of OK, though, as I see it.

When I first built out my iSCSI SAN, I did so on the same standard Cisco switches I was using in the data network, but kept it physically separate. I didn't have a political reason, of course, unless I wanted to be in a political battle with myself, since I also work on the data network. I just knew the data network was peaked out and not ideal to handle the load. Now we are upgrading the infrastructure and bringing the SAN back into the mix. That's the kind of flexibility I like about iSCSI.

    Is Unified Fabric an Inevitability?

Friday, February 20, 2009 in Gestalt, Storage by slowe | 11 comments


So here's another "thinking out loud" post. This time, I'm thinking about Fibre Channel over Ethernet (FCoE) and unified fabric.

I was going back through a list of blog posts and articles that I wanted to read and think on, and I came across a link to Dave Graham's article titled Moving a Fabric Forward: FCoE Adoption and Other Questions. His blog entry was partially in response to my FCoE discussion post. His post got me thinking again.

It seems like anytime someone talks about FCoE, they end up also talking about unified fabric. After having read a number of different articles and posts regarding FCoE, I can see where FCoE would be attractive to shops with significant FCP installations. In my mind, though, this doesn't necessarily mean unified fabric. Given the political differences in organizations (think the storage team and the networking team), how likely is it that an organization may adopt FCoE, but not unified fabric? Or how likely is it that an organization may adopt FCoE, intending it to be a transitional technology leading to unified fabric, but never actually make it all the way?

So here's my question: is unified fabric an inevitability?

(Oh, and here's a related question: most people cite VoIP as proof that the unified fabric is inevitable. More so than anything else, I believe VoIP's success was a reflection of the rising importance of TCP/IP networking. If so, does that give iSCSI an edge over FCoE? Is iSCSI the VoIP of the storage world?)

Tags: FCoE, Fibre Channel, iSCSI, Storage

    11 comments


1. Stephen Foskett on Friday, February 20, 2009 at 10:09 am

I wouldn't say that a unified fabric is necessarily an inevitability, but I do think that the Data Center Bridging protocols and converged network adapters are inevitable. So whether you want to unify storage and networking or use FCoE, you will definitely have the capability to unify them. And once you have an FCoE-capable network, there will be an inevitable pull towards using it.


2. Marc Farley on Friday, February 20, 2009 at 1:00 pm

Hi Scott, I think FCoE is likely to survive and thrive mostly because iSCSI isn't getting the same amount of technology and market development. iSCSI is mostly cooked, and even though there may be the potential to develop it further, nobody seems to have the motivation to do it. The amount of money spent to develop and sell technology matters a great deal. The storage industry is investing in FCoE today.

Unified fabric is another matter. This seems to be mostly Cisco's initiative. If it is slow selling initially, and if Cisco's R&D expenses for it are high and the IT market contracts (as it appears it might be doing), the question is how long Cisco will continue to invest in it. During that time, competitors may be able to come up with alternative products that don't require as much investment and are less disruptive. If it turns out to be a big money loser for Cisco by mid-2010, it might not make it.

3. David Magda on Friday, February 20, 2009 at 7:12 pm

I would think it's the opposite: I think iSCSI is likely to have a higher market share, mainly for the reason that it's routable.

The storage industry may be investing in FCoE, but they've already invested in iSCSI as well. NetApp and EMC have targets (as well as Sun in their 7000 line), and initiators are available for all OSes as well as VMware.

I don't think it's going to be either/or, but different people will choose different things for their needs.

I think things will generally standardize on Ethernet as well, though InfiniBand will have a decent minority stake in specialized markets. They're talking Ethernet at speeds faster than 10 Gb, but IB is already at 24 and 96 Gb. Some people need that.

4. TimC on Sunday, February 22, 2009 at 9:16 pm

The real question is, if we're going to unify the fabric, why are we using Ethernet? Why not InfiniBand? You can run all of the same protocols at about 4x the bandwidth.


5. Omar Sultan on Monday, February 23, 2009 at 2:21 am

    Scott:

I think unified fabric will continue to gain momentum because of the potential to reduce TCO by simplifying infrastructure, and also for the functional advantages of having all your initiators be able to talk to all of your targets.

That being said, I don't believe this needs to be an either/or debate. If you fast-forward a few years, I think the typical enterprise DC will have a mix of FCoE, iSCSI, and FCP. Each has its own place, and I think they can happily co-exist the same way FC SANs and filers co-exist today.

If customers do not have existing FC SANs and are not good candidates for FC, then my guess would be they will either go with iSCSI or wait for native FCoE targets.

I am not sure I can think of a scenario where I would see a customer deploy a parallel, dedicated 10GbE FCoE network (i.e., deploy FCoE but not as a unified fabric). I am not sure there is an upside for the storage team, and I am pretty sure the network team would throw up all over it.

Omar Sultan
Cisco

6. slowe on Monday, February 23, 2009 at 7:31 am

    Omar,

Thanks for your response. Given that FCoE is inherently compatible with FCP (to my understanding they are almost identical except for the physical transport), it seems reasonable to me that an organization may deploy FCoE as an extension to an existing FCP SAN but not necessarily move to unified fabric (at least, not initially).

I'd be interested to hear why you don't think that is reasonable. Can you share your thoughts?

7. Nate on Monday, February 23, 2009 at 12:45 pm


Is iSCSI the VoIP of the storage world? My answer: yes. Let's look at VoIP for a moment. What makes it great is not that it runs on Ethernet, but that it runs on the TCP/IP stack. I don't think unified fabric has much value to organizations unless the stack is also unified. By running on the TCP/IP stack, VoIP could happen with existing network equipment. iSCSI holds the same advantage (you also get routability). To unify FCoE you need special (read: expensive) network equipment. FCoE almost seems like a gimmick from storage and networking vendors. I can see some value as a transitional product, but I just don't see how it stands on its own.

8. Kosh on Tuesday, February 24, 2009 at 7:42 pm

Hi, I'm the infrastructure architect for a large and nationally recognized financial institution.

We expect to converge storage and application networks at the fabric layer (i.e., Layer 1) eventually, and yes, converged voice & data via VoIP is seen as the strategic forerunner.

Our next-generation network plans will be 10G end-to-end, and we expect to run storage over that, for both OS and data. We already qualify some Silver- and Bronze-class applications over NFS and iSCSI with GigE and LACP, and expect to be able to use FCoE in future for Gold-class applications.

We would expect to maintain separation at Layer 2 and above, via further deployment of 802.1Q and related protocols end-to-end, e.g., MPLS VPNs.

Our timeframe for this is the next 3-5 years, i.e., once the Cisco Nexus has reached the same level of maturity and $/port that made the 6509 so attractive.

9. Kosh on Tuesday, February 24, 2009 at 7:46 pm

I should add that at a recent storage summit I was speaking with other enterprise infrastructure managers & architects. We had a show of hands on various storage fabrics:

FC2: Everyone using.
FC4: Almost everyone using.
FC8: Almost no one using it or interested.
10G: Almost no one using it, but everyone interested.


Architects don't always get our way, but that's the way our informal poll showed us as leaning.

10. David Magda on Tuesday, March 3, 2009 at 9:38 pm

FCoE may be inherently compatible with FCP, but it seems that the switch companies will want you to buy special Ethernet+FCoE switches to get a unified fabric. In EMC's own words:

    http://www.youtube.com/watch?v=EZWaOda8mVY#t=3m40s

    FCoE: Divergence vs convergence

    Splitter!

By Chris Mellor

Posted in Storage, 25th June 2009 13:53 GMT

Comment: FCoE seems to be a harbinger of network divergence rather than convergence. After discussion with QLogic, and hearing about 16Gbit/s Fibre Channel and InfiniBand as well as FCoE, ideas about an all-Ethernet world seem as unreal as the concept of a flat earth.

This train of thought started when talking with Scott Genereux, QLogic's SVP for worldwide sales and marketing. It's not what he said but my take on it, and it began when Genereux's EMEA market director sidekick, Henrik Hansen, said QLogic was looking at developing 16Gbit/s Fibre Channel products. What? Doesn't sending Fibre Channel over Ethernet (FCoE) on 10Gbit/s, 40Gbit/s, and 100Gbit/s Ethernet negate that? Isn't Fibre Channel (FC) development stymied because all FC traffic will transition to Ethernet?

Well, no, not as it happens, because all FC traffic and FC boxes won't transition to Ethernet. We should be thinking FCaE (Fibre Channel and Ethernet), not FCoE.

FC SAN fabric users have no exit route into Ethernet for their FC fabric switches and directors and in-fabric SAN management functions. The Ethernet switch vendors, like Blade Network Technologies, aren't going to take on SAN storage management functions. Charles Ferland, BNT's EMEA VP, said that BNT did not need an FC stack for its switches. All it needs to do with FCoE frames coming from server or storage FCoE endpoints is route the frames correctly, meaning a look at the addressing information but no more.

Genereux said QLogic wasn't going to put an FC stack in its Ethernet switches. There is no need to put an FC stack in Ethernet switches unless they are going to be an FCoE endpoint and carry out some kind of storage processing. Neither BNT nor QLogic sees their switches doing that. Cisco's Nexus routes FCoE traffic over FC cables to an MDS 9000 FC box. Brocade and Cisco have the FC switch and director market more or less sewn up, and they aren't announcing a migration of their SAN storage management functionality to Ethernet equivalents of their FC boxes, although, longer term, it has to be on Brocade's roadmap with the DCX.
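Ferland's point (forwarding FCoE needs only the Ethernet addressing) can be illustrated with a small sketch. This Python fragment is purely illustrative; the MAC addresses and frame bytes are made up, and only the FCoE EtherType (0x8906) is real.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE frames

def forward_decision(frame: bytes):
    """Decide what to do with a frame the way a non-FC-aware switch would:
    inspect only the Ethernet header, never the encapsulated FC payload."""
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype == FCOE_ETHERTYPE:
        # Forward on destination MAC alone; the FC frame inside (SOF,
        # FC header, CRC, EOF) stays opaque. (Real FCoE frames also carry
        # an 802.1Q tag, omitted here for brevity.)
        return ("forward", dst_mac.hex(":"))
    return ("not-fcoe", hex(ethertype))

# Hypothetical frame: destination MAC, source MAC, FCoE EtherType, padding.
frame = (bytes.fromhex("0efc00010203") + bytes.fromhex("0efc000a0b0c")
         + struct.pack("!H", FCOE_ETHERTYPE) + b"\x00" * 46)
print(forward_decision(frame))  # ('forward', '0e:fc:00:01:02:03')
```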

Genereux and Hansen said that server adapters would be where Ethernet convergence would happen. The FCoE market is developing much faster than iSCSI's did, and all the major server and storage vendors will have FCoE interfaces announced by the end of the year. OK, so server Ethernet NICs and FC host bus adapters (HBAs) could turn into a single CNA (Converged Network Adapter) and send out FC messages on Ethernet. Where to?

They go to an FC-capable device: either a storage product with a native FC interface, or an FCoE switch like QLogic's product or Brocade's 8000, a top-of-rack switch which receives general Ethernet traffic from servers and splits off the FCoE frames to send them out through FC ports.

There's no end-to-end convergence here, merely a convergence onto Ethernet at the server edge of the network. And even that won't be universal. Hansen said: "There is a market for converged networks and it will be a big one. (But) converged networking is not an answer to all... Our InfiniBand switch is one of our fastest-growing businesses... Fibre Channel is not going away; there is so much legacy. We're continuing to develop Fibre Channel. There's lots of discussion around 16Gbit/s Fibre Channel. We think the OEMs are asking for it... Will Ethernet replace InfiniBand? People using InfiniBand believe in it. Converged networking is not an answer to everyone."

You get the picture. These guys are looking at the continuation of networking zones with, so far, minor consolidation of some FC storage networking at the storage edge onto Ethernet. Is QLogic positioning FCoE as an FC SAN extension technology? It seems that way.

Other people suggest that customer organisational boundaries will also inhibit any end-to-end convergence onto Ethernet. Will the FC storage networking guys smoothly move over to lossless and low-latency Ethernet even if end-to-end FCoE products are there? Why should they? Ethernet, particularly the coming lossless and low-latency version, is new and untried. Why fix something that's not broken? What is going to make separate networking and storage organisational units work together?

Another question concerns native FCoE interfaces on storage arrays. If FC SAN storage management functions are not migrating to Ethernet platforms, then they stay on FC platforms, which do I/O over FC cables to and from storage arrays with FC ports. So what is the point of array vendors adding FCoE ports? Are we looking at the possibility of direct FCoE communication between CNA-equipped servers and FCoE-equipped storage arrays, simple FCoE SANs, conceptually similar to iSCSI SANs? Do we really need another block storage access method?

Where's the convergence here, with block storage access protocols splintering into iSCSI, FCoE, and FC, as well as InfiniBand storage access in supercomputing and high-performance computing (HPC) applications?

Effectively, FCoE convergence means just two things. The first, and realistic, thing is server-edge convergence, with the cost advantages limited to that area: a total cost of ownership comparison between NICs + HBAs on the one hand and CNAs on the other, with no other diminution in your FC fabric estate. The second, and merely possible, thing is a direct FCoE link between servers and FCoE storage arrays, with no equivalent of FC fabric SAN management functionality.
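That server-edge TCO comparison is simple arithmetic. Here is a sketch with deliberately invented prices; every figure is an illustrative placeholder, not vendor data.

```python
# Hypothetical per-server adapter costs; all numbers are placeholders.
servers = 100
nic_cost = 400    # dual-port 10GbE NIC for LAN traffic
hba_cost = 900    # dual-port FC HBA for SAN traffic
cna_cost = 1100   # dual-port CNA carrying both kinds of traffic

separate = servers * (nic_cost + hba_cost)   # NICs + HBAs per server
converged = servers * cna_cost               # one CNA pair per server
print(f"NIC+HBA: ${separate:,}   CNA: ${converged:,}   "
      f"edge saving: ${separate - converged:,}")
# The saving stops at the server edge: switches, directors, and SAN
# management tools in the FC fabric estate are unchanged.
```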

This could come if IBM adds FCoE ports to its SVC (SAN Volume Controller) so that it can talk FCoE to accessing servers and to the storage arrays it manages. Another possible alternative would be for HDS to add FCoE interfaces to its USP-V and USP-VM controllers, which virtualise both HDS and other vendors' storage arrays.

If customers have to maintain a more complex Ethernet, one doing general LAN access, WAN access, iSCSI storage, and FCoE storage, possibly server clustering, as well as their existing FC infrastructure, then where is the simplicity that some FCoE adherents say is coming? FCoE means, for the next few years, minimal convergence (and that limited to the server edge) and increased complexity. Is that a good deal? You tell me.