P2V sizing + capacity planning

Discussion in 'Software' started by ThomasMc, Jul 6, 2010.

  1. ThomasMc

    ThomasMc Gigabyte Poster

I’ve been swimming b*lls deep in the lovely subject of performance and I think I understand what my test rig can and can’t do. Now my next step is to start profiling the physical servers to see what they’re doing, so I need a program that will help me a little on that side of things. Any recommendations?
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  2. zebulebu

    zebulebu Terabyte Poster

    Look no further than iometer
     
    Certifications: A few
    WIP: None - f*** 'em
  3. SimonD

    SimonD Terabyte Poster Moderator

Iometer will only give you throughput; it won't give you capacity planning, because it doesn't know what your servers do - only how fast data can be transferred between the SAN/NAS/servers.

I would suggest that you look into something like the Microsoft Assessment and Planning (MAP) Toolkit. It will let you monitor all of your machines and can determine whether they can be virtualised or not (it will give you a breakdown of CPU, RAM and disk usage).
     
    Certifications: CNA | CNE | CCNA | MCP | MCP+I | MCSE NT4 | MCSA 2003 | Security+ | MCSA:S 2003 | MCSE:S 2003 | MCTS:SCCM 2007 | MCTS:Win 7 | MCITP:EDA7 | MCITP:SA | MCITP:EA | MCTS:Hyper-V | VCP 4 | ITIL v3 Foundation | VCP 5 DCV | VCP 5 Cloud | VCP6 NV | VCP6 DCV | VCAP 5.5 DCA
    WIP: VCP6-CMA, VCAP-DCD and Linux + (and possibly VCIX-NV).
  4. ThomasMc

    ThomasMc Gigabyte Poster

Thanks for the replies lads, much appreciated. I'll have to give the MAP beta a go, as version 4 doesn't like my Office 2010 :)
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  5. zebulebu

    zebulebu Terabyte Poster

True - I was working under the assumption that Thomas actually meant SAN throughput, rather than planning whether his physical hosts were candidates for P2V. That'll teach me to read posts properly :biggrin:

    If that's the case Thomas, then you don't need to bother with any fancy tools to determine P2V readiness - just use some perfmon counters over a week for the servers you're planning on virtualising. I can give you the counters I use if you want them. I've virtualised hundreds of servers using them and never had a problem with them.
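If you want to script the collection rather than click through Perfmon, something like this would kick off a week-long capture via Windows' built-in typeperf. This is just a rough Python sketch - the counter paths and file names here are illustrative examples, not the exact set I use:

```python
# Sketch: build a week-long, one-minute-interval typeperf capture command.
# Counter paths below are standard Windows counters, used as examples only.
counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(_Total)\Disk Reads/sec",
    r"\PhysicalDisk(_Total)\Disk Writes/sec",
]

cmd = ["typeperf", *counters,
       "-si", "60",               # sample interval: every 60 seconds
       "-sc", str(7 * 24 * 60),   # sample count: one week of minutes
       "-f", "CSV",               # output format
       "-o", "baseline.csv"]      # output file
print(" ".join(cmd))
```

On the actual server you'd hand `cmd` to `subprocess.run` (or just paste the printed line into a command prompt) and leave it running for the week.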

    Also, bear in mind that just because Perfmon counters or P2V readiness tools indicate a potential problem virtualising a server (usually high CPU usage) that doesn't mean they're a poor candidate. It might be that the CPU that the original server is running on is old, and much slower than the shiny new procs you've got in your hosts - or that the workload on it is too high because loads of apps are competing for time on it. This is particularly true with development servers - and it's an ideal time to moan at in-house devs (if you have them) and tell them to move code off one server and onto another :biggrin
     
    Last edited: Jul 6, 2010
    Certifications: A few
    WIP: None - f*** 'em
  6. ThomasMc

    ThomasMc Gigabyte Poster

    Very unlike you zeb, must have been the way I asked the question :D. Those counters would be great to have also :)
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  7. zebulebu

    zebulebu Terabyte Poster

    Nothing to do with the way you asked the question - I just had a blonde (well, grey really) moment.

    I'll dig out my p2v perfmon counters later today and chuck them on here.
     
    Certifications: A few
    WIP: None - f*** 'em
  8. ThomasMc

    ThomasMc Gigabyte Poster

    Nice one, thanks bud.
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  9. zebulebu

    zebulebu Terabyte Poster

    No worries

    These are the counters I monitor - once a minute for a week gives a great baseline to use (though you can get away with doing it for a shorter period if you must - just ensure you monitor over a couple of days during 'normal' and 'peak' usage)

Avg CPU usage (%)
    Max CPU usage (%)

    Avg RAM usage (MB)
    Max RAM usage (MB)

    Avg Disk IO Reads
    Max Disk IO Reads
    Avg Disk IO Writes
    Max Disk IO Writes

You should treat all perfmon counters with a degree of caution when analysing for P2V. For instance, if you average out your disk I/O you might think it's quite high - but it's quite possible that there's a scheduled AV scan or backup taking place on the server. If you discount these, you will more than likely get much lower average and peak figures.
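To show what I mean about discounting scheduled jobs, here's a rough sketch (Python, with made-up sample figures) of averaging counter samples while excluding a nightly backup window:

```python
from datetime import datetime, time

# Hypothetical one-minute perfmon samples: (timestamp, % CPU).
samples = [
    (datetime(2010, 7, 6, 1, 30), 95.0),  # overnight backup running
    (datetime(2010, 7, 6, 2, 0), 90.0),   # still in the backup window
    (datetime(2010, 7, 6, 9, 0), 20.0),
    (datetime(2010, 7, 6, 11, 0), 30.0),
    (datetime(2010, 7, 6, 14, 0), 40.0),
]

def summarise(samples, exclude_start=time(1, 0), exclude_end=time(3, 0)):
    """Avg/max over the samples, discounting a nightly job window
    so scheduled backups or AV scans don't inflate the baseline."""
    kept = [v for ts, v in samples
            if not (exclude_start <= ts.time() < exclude_end)]
    return {"avg": sum(kept) / len(kept), "max": max(kept)}

print(summarise(samples))  # avg 30.0, max 40.0 once the backup is discounted
```

Naively averaging all five samples would give 55% CPU and a 95% peak - which would make a perfectly good P2V candidate look borderline.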

    Personally, there are only a few server types I'd be lairy about P2Ving.

    Domain Controllers just aren't worth it - I provision them from templates, dcpromo them up and, if necessary, transfer FSMO roles before decommissioning the physical DCs.

    Exchange servers are often problematic - and if you're in an environment where you have Exchange admins they get ridiculously touchy about things like RAID best practice (because they've read it in a book somewhere). Unless you can slice your SAN into different RAID levels you might have problems politically getting your Exchange environment virtualised.

    Citrix servers are a PITA - I've virtualised them in the past, but you're kind of running two 'layers' of virtualisation here (app and physical) and I've often encountered problems with them. I generally steer clear.

    SQL Servers are another grey area. You'll never be able to replicate the sort of disk performance you get from local dedicated DAS unless you spend a fortune on SAN storage, so if you have DBAs, they will moan constantly at you if you P2V a DB server running even a moderate OLTP workload onto a SAN that doesn't have a whole bunch of 15K SAS disks.

    If you do have the luxury of a decent budget, I'd opt for a 'regular' SAN to run your VMFS on, with a second 'performance' SAN to run DB, Exchange etc. storage from and map this via raw LUNs. That will enable you to utilise faster storage for DBs, Exchange etc and not waste it on VMFS that doesn't need it, and to use the funky snapshotting & storage management features of the SAN you choose.

Almost all other servers are P2V candidates. To give you an idea of what CAN be done, I have 110 servers in my main datacenter, 98 of which are VMs. This includes Exchange, most SQL Servers and all DCs. The only physical boxes I have (apart from the VM hosts themselves, the SANs and the usual firewalls, switches, SSL VPNs and what have you) are Oracle servers, Citrix boxes, network monitoring external to the VI environment and a couple of ridiculously high-usage SQL servers.
     
    Certifications: A few
    WIP: None - f*** 'em
  10. ThomasMc

    ThomasMc Gigabyte Poster

Excellent post there zeb :D We should be able to P2V most of the servers (following your tips, of course). The only one causing me concern is the sipXecs (VoIP) server, but I'm going to do some testing on that one and see how it sits. Exchange, BES, SharePoint and Office Comms are all hosted in the MS cloud, so we don't need to worry about them. Thanks Zeb and Simon.

    [Added]
I was thinking of rolling my own tier-1 storage out of small SSDs; our DB sizes are pretty small, so they wouldn't have to be big disks. Do you think this is advisable?
     
    Last edited: Jul 7, 2010
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  11. SimonD

    SimonD Terabyte Poster Moderator

My only concern with SSDs is not only the price but also the limited number of writes you get before they start to degrade. You would also need to ensure that whatever OS you're using supports TRIM on the disks, so the speed doesn't drop drastically over time.
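As a rough back-of-envelope (all figures below are hypothetical, not vendor quotes), you can estimate how long a drive would last from its rated write endurance and the daily write load you measured with perfmon:

```python
# Rough SSD lifetime estimate from rated endurance and measured writes.
# All numbers here are illustrative assumptions.
def ssd_lifetime_years(rated_tbw, daily_writes_gb, write_amplification=1.5):
    """Years until the rated write endurance is consumed.

    rated_tbw: drive endurance in terabytes written (vendor spec)
    daily_writes_gb: host writes per day, from your disk-write counters
    write_amplification: extra NAND writes per host write (assumed)
    """
    daily_tb = daily_writes_gb * write_amplification / 1024
    return rated_tbw / daily_tb / 365

# e.g. a drive rated for 70 TBW under 40 GB/day of DB writes:
print(round(ssd_lifetime_years(70, 40), 1))  # about 3.3 years
```

The write-amplification factor is the fudgy part - without TRIM it tends to climb, which is exactly the degradation concern above.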
     
    Certifications: CNA | CNE | CCNA | MCP | MCP+I | MCSE NT4 | MCSA 2003 | Security+ | MCSA:S 2003 | MCSE:S 2003 | MCTS:SCCM 2007 | MCTS:Win 7 | MCITP:EDA7 | MCITP:SA | MCITP:EA | MCTS:Hyper-V | VCP 4 | ITIL v3 Foundation | VCP 5 DCV | VCP 5 Cloud | VCP6 NV | VCP6 DCV | VCAP 5.5 DCA
    WIP: VCP6-CMA, VCAP-DCD and Linux + (and possibly VCIX-NV).
  12. onoski

    onoski Terabyte Poster


That's impressive, Zeb, as I've been told we have to leave servers like Exchange and other heavily used servers alone. At work at the moment we've P2V'd most of our servers, i.e. BlackBerry servers, file servers etc.

    To be honest, I don't think I'd be brave enough to P2V our Exchange server, as it's heavily used and classed as a critical server. Well, as you rightly said, it really comes down to your environment and how heavily the server is being used.
     
    Certifications: MCSE: 2003, MCSA: 2003 Messaging, MCP, HNC BIT, ITIL Fdn V3, SDI Fdn, VCP 4 & VCP 5
    WIP: MCTS:70-236, PowerShell
  13. Phoenix
    Honorary Member

    Phoenix 53656e696f7220 4d6f64

I have plenty of 1,000+ user Exchange systems virtualised now, as well as all the bits that go with them.
    I've yet to work in a 100% virtual datacenter though... getting close! :)

    PS: I edited the thread title to make it more appropriate.
     
    Last edited: Jul 7, 2010
    Certifications: MCSE, MCITP, VCP
    WIP: > 0
  14. ThomasMc

    ThomasMc Gigabyte Poster

Ha, silly me - I took it for granted that those issues would all be done and dusted by now. That could be a deal breaker; the price was never a problem, as I was comparing them to small 10/15k SAS disks. I've got a couple of them arriving tomorrow, so I'll get a chance to make them scream.
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  15. ThomasMc

    ThomasMc Gigabyte Poster

    I'll say one thing though, after looking at all the data I've been collecting it’s pretty embarrassing how under-utilized our whole environment is :oops::biggrin
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  16. zebulebu

    zebulebu Terabyte Poster

That's one of the main reasons virtualisation is a no-brainer. An average-sized SME with, say, twenty servers can save an absolute ton of money on hardware maintenance alone by ditching three quarters of their tin. Any finance director who sees the results of the average virtualisation POC and still doesn't 'get it' needs shooting. In fact, I'd go so far as to say that if I was brought into a company that hadn't gone down the virtual route, went through the virtualisation benchmarking and planning exercise, did all the numbers, presented them to finance and they knocked it back, I'd leave - because there would either be clear cashflow issues preventing CapEx purchases (to the detriment of the company in the long run), or the finance department would be monumentally stupid.
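As a back-of-envelope for that twenty-server SME (all the per-box figures here are illustrative assumptions, not real quotes):

```python
# Rough annual savings from consolidating physical servers onto VM hosts.
# Per-box maintenance and power figures are made-up illustrative numbers.
def consolidation_savings(physical, consolidation_ratio,
                          maint_per_box=1500, power_per_box=400):
    hosts = -(-physical // consolidation_ratio)  # ceiling division
    retired = physical - hosts
    return retired * (maint_per_box + power_per_box)

# Twenty servers at a 5:1 ratio -> 4 hosts, 16 boxes retired:
print(consolidation_savings(20, 5))  # 16 * 1900 = 30400 per year
```

Even before you count licensing, rack space and cooling, the sums tend to make the POC argue for itself.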
     
    Certifications: A few
    WIP: None - f*** 'em
