FC vs iSCSI

Discussion in 'Hardware' started by Theprof, Apr 21, 2011.

  1. Theprof

    Theprof Petabyte Poster

    4,607
    83
    211
    As the title states I am curious to see your opinions on what SAN types you'd go with?

    I realize that the two are different and choosing one over the other is not a simple task these days... We're looking into upgrading our existing FC SAN to something like the P4000, which is iSCSI-based. Personally I like having HBAs take a lot of the work off the CPU and help reduce latency, but at the same time, done right, iSCSI can have low latency as well, and with features like flow control and multipath I/O implemented we can definitely see great results.

    I am no expert by any means... I am just learning as I go, but I would be curious to hear what those who work with these technologies on a daily basis think and recommend... In the past I would hear reasoning such as "What's the size of the company, how much and what kind of data are you keeping on the SAN?", and based on those types of questions they would make a decision. Today it's different, and with lots of competition we can implement a lot of similar stuff with each solution, but which is better? I can see that FC would cost more, but is it a better solution?
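    On the multipath point: here is a toy sketch (hypothetical code, not any vendor's MPIO stack) of why round-robin multipath I/O both raises usable bandwidth and gives you failover for free:

```python
# Hypothetical sketch (not any vendor's API): why multipath I/O helps.
# Round-robin spreads I/O requests over every healthy link, so aggregate
# bandwidth scales with paths, and a dead path is simply skipped.
from itertools import cycle

class MultipathDispatcher:
    def __init__(self, paths):
        self.paths = list(paths)      # e.g. iSCSI sessions on separate NICs
        self.up = {p: True for p in self.paths}

    def mark_down(self, path):
        self.up[path] = False         # failover: stop using a failed path

    def dispatch(self, requests):
        """Assign each request to the next healthy path, round-robin."""
        healthy = [p for p in self.paths if self.up[p]]
        if not healthy:
            raise RuntimeError("no paths available")
        return {req: path for req, path in zip(requests, cycle(healthy))}

mp = MultipathDispatcher(["nic0", "nic1"])
print(mp.dispatch(["io1", "io2", "io3", "io4"]))
# io1/io3 go via nic0, io2/io4 via nic1 - twice the usable bandwidth
mp.mark_down("nic1")
print(mp.dispatch(["io5", "io6"]))    # all traffic fails over to nic0
```

    Real implementations (MPIO on Windows, DM-Multipath on Linux) also weigh queue depth and path latency, but the round-robin idea above is the core of it.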
     
    Certifications: A+ | CCA | CCAA | Network+ | MCDST | MCSA | MCP (270, 271, 272, 290, 291) | MCTS (70-662, 70-663) | MCITP:EMA | VCA-DCV/Cloud/WM | VTSP | VCP5-DT | VCP5-DCV
    WIP: VCAP5-DCA/DCD | EMCCA
  2. dales

    dales Terabyte Poster

    2,005
    51
    142
    If you're looking at getting a P4000 you might want to take a look at the EMC VNXe 3100 or 5300 range as well. I'm looking at getting maybe a couple of 5300s for our office (for offsite async DR). It seems to be a toss-up at the moment between the EqualLogics and the VNXes for the SMB market.


    Can't really comment on the difference, as I've only ever run iSCSI in environments with around 10 servers; anything over that and it's been FC.
     
    Certifications: vExpert 2014+2015+2016,VCP-DT,CCE-V, CCE-AD, CCP-AD, CCEE, CCAA XenApp, CCA Netscaler, XenApp 6.5, XenDesktop 5 & Xenserver 6,VCP3+5,VTSP,MCSA MCDST MCP A+ ITIL F
    WIP: Nothing
  3. Theprof

    Theprof Petabyte Poster

    4,607
    83
    211
    For DR, from my understanding (I could be wrong), it's cheaper to go with iSCSI as it works over TCP/IP, whereas for FC you'd need FCIP, which costs money, so I can understand why you'd be going with iSCSI...

    In our case we're just looking to get one SAN in one spot.
     
    Certifications: A+ | CCA | CCAA | Network+ | MCDST | MCSA | MCP (270, 271, 272, 290, 291) | MCTS (70-662, 70-663) | MCITP:EMA | VCA-DCV/Cloud/WM | VTSP | VCP5-DT | VCP5-DCV
    WIP: VCAP5-DCA/DCD | EMCCA
  4. onoski

    onoski Terabyte Poster

    3,120
    51
    154
    The decision lies in what technologies you're currently using and how comfortable you are with supporting that technology, be it Fibre Channel or iSCSI SAN.

    In our SAN environment we use iSCSI, and seeing as it runs over TCP/IP Ethernet at up to 10 Gbps, performance-wise it's not bad and the learning curve is not too steep.

    One would also have to take into account tasks such as site replication and disaster recovery, as location and distance play a big part in choosing either iSCSI or Fibre Channel.

    With Fibre Channel, on the other hand, you would need an expert to get it all set up and started. This can also lead to high consultancy fees and possibly ongoing support costs.

    Security can also be a deciding factor: neither plain iSCSI nor Fibre Channel encrypts data in transit, so traffic can be intercepted by anyone with access to the network or fabric. There are ways round this, but again they involve some sort of extra cost.

    I hope I'm not sounding biased here, as that's not my intention; I'm just raising reasons why each would or would not be used in a SAN environment.

    To be honest, if your current SAN needs to run on Fibre Channel then I'm sure the reasons more than suffice to justify the cost.
    That's not to say iSCSI is cheap, because it isn't, but from a learning-curve perspective it would suit most techs, i.e. the networking/TCP/IP side of things is not as daunting compared with Fibre Channel.

    In a nutshell, they both have their benefits as well as their disadvantages.
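    On the security point: iSCSI traffic on a shared IP network can be protected relatively cheaply. A minimal sketch, assuming a Linux open-iscsi initiator (the option names are real open-iscsi settings; the credential values are placeholders):

```ini
# /etc/iscsi/iscsid.conf - enable CHAP authentication for iSCSI sessions
# (the username/secret values below are made-up examples)
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator-user
node.session.auth.password = example-secret-12ch
# CHAP only authenticates; for in-flight confidentiality, run the iSCSI
# VLAN over IPsec or keep it on a physically isolated storage network.
```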
     
    Certifications: MCSE: 2003, MCSA: 2003 Messaging, MCP, HNC BIT, ITIL Fdn V3, SDI Fdn, VCP 4 & VCP 5
    WIP: MCTS:70-236, PowerShell
  5. craigie

    craigie Terabyte Poster

    3,020
    174
    155
    Your P4000 is an HP LeftHand, I'm assuming; if it's an old version it would be 4Gb Fibre.

    Now the standard is 8Gb Fibre, whereas with iSCSI you are limited by the number of network cards that can be teamed.

    As Dale says, anything more than 10 Servers, I would be going for Fibre Channel.

    I present solutions regularly, and on average Fibre Channel gives you the same performance as direct-attached storage, so no performance loss.

    Whereas when you compare the IOPS for iSCSI, you see a 30-40% drop in performance.

    You also need to factor in your Exchange database and SQL database, the number of users, the number of servers, etc.
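    The bandwidth gap behind this comparison can be put in rough numbers. A back-of-envelope sketch using the commonly quoted nominal usable rates (not measured results, and ignoring TCP/iSCSI and teaming overhead):

```python
# Rough nominal usable data rates per link, in MB/s (commonly quoted
# figures, before protocol overhead - not benchmark results).
usable_MBps = {
    "1GbE NIC":  125,   # 1 Gbit/s Ethernet ~ 125 MB/s
    "4Gb FC":    400,   # 4GFC is specified at ~400 MB/s per direction
    "8Gb FC":    800,   # 8GFC ~ 800 MB/s per direction
    "10GbE NIC": 1250,  # 10 Gbit/s Ethernet ~ 1250 MB/s
}

def teamed(nic, count):
    """Ideal aggregate for `count` teamed NICs (real teams rarely scale perfectly)."""
    return usable_MBps[nic] * count

# How many teamed 1GbE links would it take to match one 8Gb FC port?
print(teamed("1GbE NIC", 4))                              # 500 MB/s, still short
print(usable_MBps["8Gb FC"] / usable_MBps["1GbE NIC"])    # 6.4 links needed
```

    This is why the thread keeps circling back to 10GbE: one 10GbE port comfortably exceeds an 8Gb FC port on raw bandwidth, leaving latency and cost as the real deciding factors.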
     
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  6. ThomasMc

    ThomasMc Gigabyte Poster

    1,507
    49
    111
    Really? I would love to see the results from your tests on that one. And then there's 10GbE, of course, re your NIC-teaming comment.
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  7. craigie

    craigie Terabyte Poster

    3,020
    174
    155
    There you go mate. Bear in mind most people using a SAN will have their data on RAID 5; you might see RAID 10 for SQL.

    I didn't mention 10GbE, as the switch prices normally put it out of the equation.

    [image: benchmark results]
     
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  8. craigie

    craigie Terabyte Poster

    3,020
    174
    155
    Mate, you don't have to use the inbuilt SAN replication (the VSA/SAN iQ software); you can use built-in MS technologies for your core services, e.g. SQL database mirroring/clustering, Exchange 2010 DAG, DFS-R, AD, DNS, etc.

    Then you can replicate your non-core servers on a daily/weekly basis, e.g. BES, SQL front end, SharePoint front end, TS, as that data remains fairly static; something like Quest vReplicator can do that for you.

    Just throwing it out there :D
     
    Last edited: Apr 21, 2011
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  9. ThomasMc

    ThomasMc Gigabyte Poster

    1,507
    49
    111
    Thanks ;)
     
    Last edited: Apr 21, 2011
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  10. Theprof

    Theprof Petabyte Poster

    4,607
    83
    211
    Absolutely... I meant to refer just to SAN-to-SAN replication rather than using the MS technologies, but yes, you're right! :biggrin
     
    Certifications: A+ | CCA | CCAA | Network+ | MCDST | MCSA | MCP (270, 271, 272, 290, 291) | MCTS (70-662, 70-663) | MCITP:EMA | VCA-DCV/Cloud/WM | VTSP | VCP5-DT | VCP5-DCV
    WIP: VCAP5-DCA/DCD | EMCCA
  11. Theprof

    Theprof Petabyte Poster

    4,607
    83
    211
    Some good input here, I appreciate it... learned something new today!
     
    Certifications: A+ | CCA | CCAA | Network+ | MCDST | MCSA | MCP (270, 271, 272, 290, 291) | MCTS (70-662, 70-663) | MCITP:EMA | VCA-DCV/Cloud/WM | VTSP | VCP5-DT | VCP5-DCV
    WIP: VCAP5-DCA/DCD | EMCCA
  12. ThomasMc

    ThomasMc Gigabyte Poster

    1,507
    49
    111
    Another alternative could be Veeam Backup & Replication for non-SAN-to-SAN replication (or if your SANs won't replicate to each other), and you also get great features like SureBackup.
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  13. Theprof

    Theprof Petabyte Poster

    4,607
    83
    211
    At the moment we're leaning towards the P4000; we like the idea of frameless, scale-out storage, plus the cost...
     
    Certifications: A+ | CCA | CCAA | Network+ | MCDST | MCSA | MCP (270, 271, 272, 290, 291) | MCTS (70-662, 70-663) | MCITP:EMA | VCA-DCV/Cloud/WM | VTSP | VCP5-DT | VCP5-DCV
    WIP: VCAP5-DCA/DCD | EMCCA
  14. SimonD
    Honorary Member

    SimonD Terabyte Poster

    3,681
    440
    199
    Just remember that you're not stuck with software iSCSI initiators; you can put hardware ones in that help offload some of the CPU load.

    Personally speaking, if your FC infrastructure is in place, working, and capable of being upgraded to 8Gb fairly easily, I would probably stick with that.

    If you were implementing a new solution, however, I would probably opt for the iSCSI solution instead (ease of use/implementation).
     
    Certifications: CNA | CNE | CCNA | MCP | MCP+I | MCSE NT4 | MCSA 2003 | Security+ | MCSA:S 2003 | MCSE:S 2003 | MCTS:SCCM 2007 | MCTS:Win 7 | MCITP:EDA7 | MCITP:SA | MCITP:EA | MCTS:Hyper-V | VCP 4 | ITIL v3 Foundation | VCP 5 DCV | VCP 5 Cloud | VCP6 NV | VCP6 DCV | VCAP 5.5 DCA
  15. craigie

    craigie Terabyte Poster

    3,020
    174
    155
    One of my mates is a pre-sales storage architect for HP, and he has mentioned that he would only put in 1Gb iSCSI if performance was not needed.

    For everything else, Fibre Channel or 10GbE.
     
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  16. Phoenix
    Honorary Member

    Phoenix 53656e696f7220 4d6f64

    5,749
    200
    246
    Erm, FC? Isn't that dead yet?

    New install: go iSCSI, no question, unless latency is an essential factor; then look at FC, but beware of the costs and complexity associated with a dedicated fabric, and its life expectancy given the introduction of FCoE.

    FC offers very little value over a multi-1Gb or 10Gb iSCSI deployment these days.

    Edit: on Craigie's post above, unless your 'performance' requirements are saturating the link, FC vs iSCSI is moot; if you are saturating it, look at the next level of bandwidth. If, and only if, latency is a major concern should FC be a requirement; otherwise it's just the individual's preference, not a technical reason for the choice.
     
    Last edited: Apr 27, 2011
    Certifications: MCSE, MCITP, VCP
    WIP: > 0
  17. zebulebu

    zebulebu Terabyte Poster

    3,748
    330
    187
    One of your mates is a total pillock then. Our entire infrastructure (NASDAQ-traded company, something like 180Tb of storage company-wide, over 500 VMs, Exchange and massively-caned SQL and Oracle DB servers) runs on 1Gb iSCSI. FC has no native benefit over iSCSI and hasn't had for years - unless you count being a lot more expensive and a lot more complex as a 'benefit' (which, I guess in pre-sales land, might well be the truth).
     
    Certifications: A few
    WIP: None - f*** 'em
