Why not to use SATA

Discussion in 'VMware Certifications' started by Denver Maxwell, Jan 27, 2012.

  1. Denver Maxwell

    Denver Maxwell Nibble Poster

    Why do people assume that because SATA 7.2k drives are 60% as fast as 10K/15K SAS, they're a good alternative?
    The problem is that they end up using the extra space to put on extra VMs and end up with piss-poor performance.

    [Attachment 2578: graphic comparing relative disk performance]

    The little graphic above may make more sense.
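    To put rough numbers on the graphic's point, here's a minimal back-of-the-envelope sketch; the per-spindle IOPS figures are common rule-of-thumb values (Phoenix quotes ~80 and ~180 later in the thread), not vendor specs:

    ```python
    # Rule-of-thumb per-spindle IOPS; real figures vary by vendor and workload.
    PER_SPINDLE_IOPS = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 180}

    def iops_per_vm(drive_type: str, spindles: int, vms: int) -> float:
        """Aggregate back-end IOPS shared evenly across the VMs on the datastore."""
        return PER_SPINDLE_IOPS[drive_type] * spindles / vms

    # Same 8-spindle shelf either way; the bigger SATA drives tempt you to host more VMs.
    print(iops_per_vm("15k SAS", 8, 20))    # 72.0 IOPS per VM
    print(iops_per_vm("7.2k SATA", 8, 20))  # 32.0 IOPS per VM
    print(iops_per_vm("7.2k SATA", 8, 50))  # 12.8 IOPS per VM once the extra space fills up
    ```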
     
    Certifications: VMware VCP v5, GVF Level 3a, ITIL V3, Windows 2008 something or other...
    WIP: Prince 2? CCENT? mmm donno
  2. SimonD
    Honorary Member

    SimonD Terabyte Poster

    It's more often than not to do with cost: cheaper disks allow for more spindles, which gives you better performance (generally speaking). Yes, you could go down the SAS route, but the cost of those disks compared to SATA is quite large. Another thing to take into consideration with SAS disks is the number you would require if you have a large VM environment, more so if you're using snapshots.

    Another thing to consider is tiered storage. For high-IOPS VMs you're going to be sticking them on a shelf with lots of fast disk (whether that's SAS or SSD), and you're going to be using slower SATA for your low-IOPS machines. As an example, you wouldn't use SATA for your Exchange or SQL environments, but you would for your file or AD server.

    SATA, SAS and SSD all have their place in SAN storage environments; it's knowing when to use which that's more important.
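    As a minimal sketch of that placement rule (the IOPS thresholds here are purely illustrative assumptions, not vendor guidance):

    ```python
    # Toy tier-placement rule: fast disk for high-IOPS VMs, SATA for the rest.
    def pick_tier(avg_iops: int) -> str:
        if avg_iops > 1000:
            return "SSD"
        if avg_iops > 200:
            return "10k/15k SAS"
        return "7.2k SATA"

    # Illustrative workload profiles matching the examples above.
    for vm, iops in {"SQL": 2500, "Exchange": 900, "file server": 60, "AD": 40}.items():
        print(f"{vm}: {pick_tier(iops)}")
    ```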
     
    Certifications: CNA | CNE | CCNA | MCP | MCP+I | MCSE NT4 | MCSA 2003 | Security+ | MCSA:S 2003 | MCSE:S 2003 | MCTS:SCCM 2007 | MCTS:Win 7 | MCITP:EDA7 | MCITP:SA | MCITP:EA | MCTS:Hyper-V | VCP 4 | ITIL v3 Foundation | VCP 5 DCV | VCP 5 Cloud | VCP6 NV | VCP6 DCV | VCAP 5.5 DCA
  3. Denver Maxwell

    Denver Maxwell Nibble Poster

    Indeed. The problem is that a lot of people end up just using large SATA / 7.2k SAS drives and expect them to perform. Price depends on the vendor, but there is not all that much difference, not as much as you might expect (excluding the present situation with HDDs, of course), depending on vendor and discounts.

    The problem with the more-spindles approach is that people end up using all the storage...

    I guess a lot of it is to do with consolidation ratios and costs. Don't get me wrong, there is good reason to use low-performance disks when that is the requirement, for things such as data archival and backups, or in small office deployments. But for highly consolidated, high-usage VMs it's not the thing to use.

    I was just trying to show the relative performance with the graphic above. So many people, especially sales people, just don't get it at all.
     
    Certifications: VMware VCP v5, GVF Level 3a, ITIL V3, Windows 2008 something or other...
    WIP: Prince 2? CCENT? mmm donno
  4. craigie

    craigie Terabyte Poster

    Sorry to say that chart is way off; a more reasonable set of IOPS figures comes from Duncan Epping at Yellow Bricks: IOps? - Yellow Bricks

    Another really important factor is the difference between back-end and front-end IOPS, which are hugely different: back end is raw disk performance, while front end is what is delivered to the application, e.g. a file share. (A rule-of-thumb formula is sketched below.)
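    A rough sketch of that relationship, using the textbook RAID write penalties (these are generic rule-of-thumb values, not 3PAR-specific):

    ```python
    # Front-end IOPS = back-end IOPS / (read% + write% * RAID write penalty).
    RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    def front_end_iops(back_end: float, read_fraction: float, raid: str) -> float:
        """Front-end IOPS a RAID set can deliver for a given read/write mix."""
        write_fraction = 1 - read_fraction
        return back_end / (read_fraction + write_fraction * RAID_WRITE_PENALTY[raid])

    # 128 x 15K SAS at ~200 IOPS each = 25,600 back end; at 70/30 R/W on RAID 5
    # this generic formula gives ~13,500 front end. The 3PAR figure quoted below
    # (9,310) is lower because the array's own overheads are factored in.
    print(front_end_iops(128 * 200, 0.70, "RAID 5"))  # ~13473.7
    ```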

    To give you an idea of the difference, here's a solution I recently sized. DR was sized for cost rather than performance: if the customer were in DR for a prolonged period, we could add a couple of 15K shelves and the HP 3PAR would automatically optimize blocks onto the correct storage tier.

    300GB 15K SAS drives, 70/30 read/write ratio

    An HP 3PAR F400 with 128 15K SAS drives will give 25,600 IOPS back end and approximately 9,310 IOPS front end.

    This is before caching; with caching we would expect front-end performance to be around 30% higher.

    Usable space after RAID 5 (7+1): 30.242 TB


    2TB 7.2K NL drives, 70/30 read/write ratio

    An HP 3PAR F400 with 32 7.2K NL drives will give 2,400 IOPS back end and approximately 801 IOPS front end.

    This is before caching; with caching we would expect front-end performance to be around 30% higher.

    Usable space after RAID 5 (2+1): 44.252 TB

    You can do the maths to see that even if we increase the number of spindles, the 7.2K NL drives will never perform to the same level as the 15K SAS.
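    Doing that maths explicitly, using only the figures from this post (2,400 back-end IOPS from 32 NL drives implies 75 IOPS per drive):

    ```python
    import math

    # How many 7.2K NL spindles would match the 128-drive 15K SAS back end?
    sas_back_end = 128 * 200        # 25,600 back-end IOPS from the 15K config above
    nl_per_drive = 2400 / 32        # 75 IOPS per NL drive, implied above
    print(math.ceil(sas_back_end / nl_per_drive))  # 342 NL drives needed
    ```

    At well over 300 spindles, the NL option stops being the cheap one long before it matches the SAS performance.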
     
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  5. craigie

    craigie Terabyte Poster

    When I'm talking to customers about storage, I also talk to them about not just the technical aspect but the business aspect too.

    How much of a performance hit can you take if a drive controller fails? On most SANs, if a controller fails, write caching is disabled to prevent data loss, and that comes with a large performance hit. Do they then need to look at two SANs, or a SAN that can have mesh controllers (more than two, to avoid the cache performance hit)?

    Do they need to be able to snapshot the SAN for backups?

    Do they need to use SAN-based replication for DR, and does this need to be synchronous or asynchronous?

    Does the SAN need to work with any vendor-based technology for DR, and will it be compatible?

    How does the business expect data to grow over the next 3-5 years? As we all know, they will have more data rather than less. Is the SAN capable of being sized to that level, and what are the costs: is it just disks, or are new chassis required, and if so at what point? (A quick growth calculation is sketched below.)
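    The starting capacity and 25% annual growth rate here are purely assumed for illustration:

    ```python
    # Project capacity over 5 years at an assumed compound growth rate.
    current_tb, annual_growth = 30.0, 0.25
    for year in range(1, 6):
        print(f"Year {year}: {current_tb * (1 + annual_growth) ** year:.1f} TB")
    # 30 TB at 25%/year roughly triples to ~91.6 TB by year 5 -- which is when
    # you find out whether it's just more disks or a whole new chassis.
    ```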

    What bandwidth do they need the SAN to deliver? Will basic multipathing do, or do they require ALUA?

    Is Fibre Channel or iSCSI appropriate?

    What IOPS do they require, front end (FE) and back end (BE)?
     
    Certifications: CCA | CCENT | CCNA | CCNA:S | HP APC | HP ASE | ITILv3 | MCP | MCDST | MCITP: EA | MCTS:Vista | MCTS:Exch '07 | MCSA 2003 | MCSA:M 2003 | MCSA 2008 | MCSE | VCP5-DT | VCP4-DCV | VCP5-DCV | VCAP5-DCA | VCAP5-DCD | VMTSP | VTSP 4 | VTSP 5
  6. Phoenix
    Honorary Member

    Phoenix 53656e696f7220 4d6f64

    I don't know many people serious about virtualization who assume they can dump all their VMs on SATA without regard for it.
    The average IOPS of a 7.2k NL-SAS/SATA drive is ~80, rising to ~180 for a 15k SAS drive.
    The capacity difference between those drives is huge, though: the larger SAS drives are 600GB these days, compared to 3TB on the SATA front.
    Generally these days we employ tiered solutions, as the vast majority of workloads really don't need masses of IO most of the time, and tiering gives us flexibility that didn't previously exist.
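    Using those figures, the IOPS-per-TB density gap is what makes capacity-led SATA sizing dangerous (a rough calculation, treating each drive's quoted IOPS and size as representative):

    ```python
    # IOPS density: how much IO each TB of capacity actually gets.
    drives = {"15k SAS 600GB": (180, 0.6), "7.2k NL-SAS/SATA 3TB": (80, 3.0)}
    for name, (iops, capacity_tb) in drives.items():
        print(f"{name}: {iops / capacity_tb:.0f} IOPS per TB")
    # 300 vs ~27 IOPS per TB: the SATA drive holds 5x the data but serves
    # each TB with roughly a tenth of the IO.
    ```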

    That said, the discussion needs to be had about the applications, not just the VM, not just the OS, not even just the server system in play.
    The actual USER application and experience is all that matters at the end of the day, and those calculations are unique to every environment; without decent up-front consulting around that aspect you will more often than not deliver a **** result.
     
    Certifications: MCSE, MCITP, VCP
    WIP: > 0
  7. Denver Maxwell

    Denver Maxwell Nibble Poster

    Hi Craigie,
    I like the link; mine is more of an example to get people to think. Getting anything other than estimations requires testing... and lots of it, over differing systems.
    Note that I never even brought RAID level and cache sizing into it (or cache configuration, for that matter), nor sequential versus random IO; there's also the SFF/LFF question to consider.

    I am a fan of multi-tiered storage systems, and I really like the prospect of a storage subsystem doing the hard work for me when it comes to migrating heavily utilised data to high-performance storage such as SSD, especially when it's just the data that's hit hard and not the entire VM or filesystem.
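    A toy sketch of that block-level auto-tiering idea (the access counts and threshold are invented for illustration; real arrays use far more sophisticated heat maps):

    ```python
    from collections import Counter

    # Per-block access counts over a sampling window (invented numbers).
    access_counts = Counter({"blk-01": 950, "blk-02": 12, "blk-03": 480, "blk-04": 3})
    HOT_THRESHOLD = 400  # accesses per window; an assumed cut-off

    # Promote only the hot blocks to SSD; the cold bulk stays on SATA.
    placement = {blk: ("SSD" if hits >= HOT_THRESHOLD else "7.2k SATA")
                 for blk, hits in access_counts.items()}
    print(placement)
    ```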

    The only problem with high-end SAN storage and its niceties is that it's not the same for low-end storage configurations. An SMB may only have a dozen or so VMs that they wish to connect to a low-end SAN for some of the nice vMotion-like features. They may only be considering half a dozen disks, and they may or may not need lots of IOPS or data throughput. 7.2k disks may be fine to start with, and have lots of extra space to grow into (this being the issue): disks with 60% of the performance but 5 or 6 times the capacity can fairly shortly end in a bad way. It all comes down to good old analysis of requirements, now and for a few years down the line.

    I must say I'm looking forward to what comes next in the SAN/storage market, especially the lower end of it, given the performance of SSDs and the desire to virtualise even small SMB clients. When a high-end consumer SSD can annihilate the performance of a low-end SAN, surely it's time for some real changes in the storage market (and I do mean from the big vendors).
     
    Certifications: VMware VCP v5, GVF Level 3a, ITIL V3, Windows 2008 something or other...
    WIP: Prince 2? CCENT? mmm donno
