
Resolved ESX won't use entire LUN capacity

Discussion in 'Virtual Computing' started by michael78, Aug 24, 2010.

  1. michael78

    michael78 Terabyte Poster

    Strange problem with an ESX lab setup using OpenFiler. I have set up my RAID 5 config and it all looks fine, but when I connect to the LUN and format it via vSphere it sees the capacity as 2.73TB yet the available space as only 745GB. There is nothing on the disks, so I'm stumped. Any ideas?
     
    Certifications: A+ | Network+ | Security+ | MCP | MCDST | MCTS: Hyper-V | MCTS: AD | MCTS: Exchange 2007 | MCTS: Windows 7 | MCSA: 2003 | ITIL Foundation v3 | CCA: Xenapp 5.0 | MCITP: Enterprise Desktop Administrator on Windows 7 | MCITP: Enterprise Desktop Support Technician on Windows 7
    WIP: Online SAN Overview, VCP in December 2011
  2. zebulebu

    zebulebu Terabyte Poster

    What are your partitions set up like in OpenFiler? I bet you have the OpenFiler partitions (boot, root, etc.) on the same array as your storage. Take a look at the partition table on your OF box - pound to a penny your partitioning has something daft like a 1.6TB "/" partition or similar.
     
    Certifications: A few
    WIP: None - f*** 'em
  3. ThomasMc

    ThomasMc Gigabyte Poster

    I've seen something similar to this. The first time it was a non-initialised array; the second was the MSDOS labelling on the disk, and changing to GPT solved that one.
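    For background on ThomasMc's fix: an MSDOS (MBR) label addresses sectors with a 32-bit field, so with the usual 512-byte sectors it tops out at 2 TiB, and anything bigger needs GPT. A minimal sketch of that arithmetic (the disk sizes below are illustrative assumptions, not the poster's exact hardware):

    ```python
    # MBR (MSDOS label) uses 32-bit LBA addressing, so with 512-byte sectors
    # the largest addressable disk is 2^32 * 512 bytes = 2 TiB. GPT uses
    # 64-bit LBAs and has no such practical limit.

    SECTOR_BYTES = 512
    MBR_MAX_BYTES = (2**32) * SECTOR_BYTES  # 2 TiB ceiling for an MSDOS label

    def needs_gpt(disk_bytes: int) -> bool:
        """True if the disk is too large for an MSDOS partition table."""
        return disk_bytes > MBR_MAX_BYTES

    TIB = 1024**4
    print(needs_gpt(int(2.73 * TIB)))  # True - a ~2.73 TiB array exceeds MBR
    print(needs_gpt(1 * TIB))          # False - a 1 TiB disk is fine with MSDOS
    ```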
     
    Certifications: MCDST|FtOCC
    WIP: MCSA(70-270|70-290|70-291)
  4. michael78

    michael78 Terabyte Poster

    I have OpenFiler installed on a separate disk connected via a SATA port on the motherboard. My disks are connected to a dedicated RAID card. I think I have the answer: from what I'm told, ESX can't use LUNs over 2TB, and I've set mine up as one big 3.7TB LUN. I'm going to divide the disk up into smaller chunks tonight and see if that works.
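    Assuming the often-reported VMFS-3 behaviour where capacity beyond the 2 TiB limit wraps around (the host sees size modulo 2 TiB), the odd numbers in the first post roughly line up. Treating the reported figures as binary TiB/GiB is an assumption here:

    ```python
    # Older ESX releases (VMFS-3) cannot address LUNs larger than 2 TiB; a
    # frequently reported symptom is that the visible free capacity "wraps",
    # i.e. the host effectively sees size modulo 2 TiB. A rough check against
    # the first post's numbers (2.73 TB capacity, ~745 GB available).

    TIB = 1024**4
    GIB = 1024**3
    LIMIT = 2 * TIB

    def wrapped_capacity(lun_bytes: int) -> int:
        """Capacity a VMFS-3 host would see if the size wraps at 2 TiB."""
        return lun_bytes % LIMIT

    seen = wrapped_capacity(int(2.73 * TIB))
    print(round(seen / GIB))  # ~748 GiB, in the ballpark of the reported 745 GB
    ```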
     
  5. michael78

    michael78 Terabyte Poster

    Yep, I got stumped by that when I first set up my disks but figured out I needed to change the labelling. Amazing that small things like that can ruin your day when you spend most of it figuring out what the hell is wrong, only to find it's the bloody disk label.
     
  6. SimonD

    SimonD Terabyte Poster Moderator

    As a side note, whenever I can I carve up my LUNs to approx 600GB for ESX. Also, you should never allocate all of the disk: if you have a 2TB disk, you should carve just 1.8TB of it into 3 usable 600GB drives.

    Why? Mainly because it gives the drive space to do what it needs to in the background; if you completely fill the space with drive allocations you run the risk of the system slowing down.
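    SimonD's rule of thumb above can be sketched as simple arithmetic. The 600GB LUN size and ~10% headroom come from his post; the helper itself is purely illustrative:

    ```python
    # Don't allocate the whole disk: reserve some headroom, then carve the
    # rest into equal-sized LUNs. With a 2 TB disk, 600 GB LUNs and 10%
    # headroom this reproduces SimonD's "3 x 600 GB from 1.8 TB" example.

    def carve(disk_gb: int, lun_gb: int, headroom: float = 0.10) -> int:
        """Number of equal LUNs that fit once headroom is reserved."""
        usable = disk_gb * (1 - headroom)
        return int(usable // lun_gb)

    print(carve(2000, 600))  # 3 LUNs (1.8 TB allocated, rest left as headroom)
    ```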
     
    Certifications: CNA | CNE | CCNA | MCP | MCP+I | MCSE NT4 | MCSA 2003 | Security+ | MCSA:S 2003 | MCSE:S 2003 | MCTS:SCCM 2007 | MCTS:Win 7 | MCITP:EDA7 | MCITP:SA | MCITP:EA | MCTS:Hyper-V | VCP 4 | ITIL v3 Foundation | VCP 5 DCV | VCP 5 Cloud | VCP6 NV | VCP6 DCV | VCAP 5.5 DCA
    WIP: VCP6-CMA, VCAP-DCD and Linux + (and possibly VCIX-NV).
  7. michael78

    michael78 Terabyte Poster

    Cheers for the advice, Simon mate - I'm going to do as you say and carve the disk up. I mainly did it as one big LUN just for simplicity, which has ended up wasting a lot of time in head-scratching :oops:
     
  8. ThomasMc

    ThomasMc Gigabyte Poster

    Not really a waste of time, you learn more when things don't go right :)
     
  9. michael78

    michael78 Terabyte Poster

    Yep you have a point :biggrin
     
  10. zebulebu

    zebulebu Terabyte Poster

    Similar to Simon, I use 500GB LUNs. It just makes them easier to maintain (and any VM requiring a disk larger than 500GB should have that disk on its own RDM anyway, instead of via VMFS).
     
  11. Phoenix
    Honorary Member

    Phoenix 53656e696f7220 4d6f64

    I disagree with Zeb's last point.

    I primarily use 500GB LUNs too, but that's mainly because I don't want clients putting more than 8-10 VMs per LUN: with more VMDKs on a single LUN, SCSI lock/reservation issues cause excessive queue depths.

    You can always have a dedicated VMFS LUN for file/Exchange/SQL-type drives that need more than 500GB, and by not going the RDM route you cut out a lot of the compatibility fluff that comes with it. VMFS performance is more than adequate for most of these usage scenarios, and if it's not I would go the direct iSCSI or NPIV route for direct-attached storage rather than RDM.

    But RDM was the only way to go once ;)
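    Phoenix's 8-10 VMs-per-LUN guideline can be turned into a trivial sizing helper. The per-LUN cap is his figure; everything else here is an illustrative assumption:

    ```python
    # Cap the number of VMs per LUN to keep SCSI reservation contention and
    # queue depths under control, then compute how many LUNs a given VM
    # population needs.

    import math

    def luns_needed(vm_count: int, max_vms_per_lun: int = 10) -> int:
        """Minimum LUNs so no LUN carries more than max_vms_per_lun VMs."""
        return math.ceil(vm_count / max_vms_per_lun)

    print(luns_needed(25))     # 3 - 25 VMs at 10 per LUN
    print(luns_needed(25, 8))  # 4 - with the stricter 8-VM cap
    ```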
     
    Certifications: MCSE, MCITP, VCP
    WIP: > 0
  12. zebulebu

    zebulebu Terabyte Poster

    I still put Exchange on an RDM so that I can use the native snapshotting capabilities of my SAN (EqualLogic). Of course, that doesn't make a blind bit of difference in a homebrew setup using OpenFiler :biggrin
     
  13. Phoenix
    Honorary Member

    Phoenix 53656e696f7220 4d6f64

    Word.

    Zeb makes a very good point: if direct iSCSI and NPIV are not options, RDM lets you use the native tools on the SAN (NetApp, EqualLogic, maybe LeftHand now, etc.), so this is one of the scenarios where that's the more powerful feature :)
     
  14. michael78

    michael78 Terabyte Poster

    Guys, just to let you know, I've redone my LUNs and split them into 500GB chunks, and it all works spiffingly now :biggrin
     
