VMware – UUID: know the facts

If you have ever moved a virtual machine to a new location, manually or via SAN replication, you need to be clear about UUIDs.

UUID = universally unique identifier.

When you first fire up the moved virtual machine, you will get this pop-up message.

Now, if you want to keep the UUID the same, select “I moved it”; otherwise leave it on the default, “I copied it”.

When you select “I moved it”, the BIOS and Ethernet UUIDs are not changed, so if you are running a Windows OS it will not see any modification to its hardware. If you select “I copied it”, every UUID in the VMX file is changed, and Windows may ask to be re-activated.

VMware - UUID 01
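
Under the hood, these answers just drive UUID entries in the VM’s .vmx file. A sketch of the relevant entries is below; the UUID and MAC values are made up for illustration, and `uuid.action` is an optional entry you can set to “keep” to suppress the pop-up and always behave as if you answered “I moved it”:

```
uuid.bios = "56 4d 9a 10 23 bc de f0-11 22 33 44 55 66 77 88"
uuid.location = "56 4d 9a 10 23 bc de f0-11 22 33 44 55 66 77 88"
ethernet0.generatedAddress = "00:0c:29:66:77:88"
uuid.action = "keep"
```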

To see the UUID in Windows, run the following from the command line.

Open a Command Prompt and type “wmic”.

Once in the tool, type “csproduct” and press Return. It will show you the UUID as Windows sees it.

VMware - UUID 02

To see a virtual machine’s UUID via SSH on the ESXi host, you must first power the VM off; otherwise its VMX file will be locked.

VMware - UUID 03
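
Once the VM is powered off, a `grep -i uuid` against the .vmx file over SSH is enough. As a rough sketch, the same extraction in Python (the VMX content below is a made-up example, not from a real host):

```python
# Minimal sketch: pull the UUID-related entries out of .vmx file content.
# The VMX text here is a made-up example for illustration only.
vmx_text = '''\
uuid.bios = "56 4d 9a 10 23 bc de f0-11 22 33 44 55 66 77 88"
uuid.location = "56 4d 9a 10 23 bc de f0-11 22 33 44 55 66 77 88"
displayName = "TEST"
'''

def vmx_uuids(text):
    """Return key -> value for every VMX line whose key mentions 'uuid'."""
    out = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        if "uuid" in key.lower():
            out[key] = value.strip().strip('"')
    return out

print(vmx_uuids(vmx_text))
```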

VMware – Creating and deleting a snapshot with RDM disk attached

As you can see from Fig 1, my TEST server has one virtual VMDK disk, and the other disk is a physical RDM.

Fig 1

VMware - VM hardware 1

Fig 2

VMware - VM hardware 2

If you tried to take a snapshot with “snapshot the virtual memory” ticked, you would get the error message shown below in Fig 4.

Fig 3

VMware - VM hardware 11

Fig 4

VMware - VM hardware 12

If you choose “Quiesce guest file system” instead, you can take a snapshot with an RDM disk attached to the virtual machine.

Fig 5

VMware - VM hardware 17

As you can see I’ve created my “TEST1” snapshot.

Fig 6

VMware - VM hardware 13

Now, if you want to delete the snapshot, go to Snapshot Manager, select the snapshot you want to remove, and press the Delete button.

If you pressed the “Go to” button instead, your virtual machine would revert to the state captured by the snapshot.

Fig 7

VMware - VM hardware 14

Now the snapshot has been deleted. Just remember that VMware best practice is to keep a snapshot for no more than 24-72 hours and no more than 1-2 snapshots per virtual machine, to avoid long deletion or consolidation times.

Fig 8

VMware - VM hardware 15

VMware – Snapshot delta showing different size?

Following on from my last post, “VMware – Snapshots, which option to choose when a machine is powered off?”.

The memory and quiesce options in the snapshot window are greyed out. This is because the machine is powered off, so there is no reason to quiesce the disk.

vCenter - snapshot windows

But I have a question for you, as I noticed something weird.

Why does Fig 1 show one delta VMDK file as 4MB and the other as only 64K, while Fig 2 shows both delta VMDK files as 4MB?

Fig 1

vCenter - snapshot size 1

Fig 2

vCenter - snapshot size 2

VMware – Snapshots, which option to choose when a machine is powered off?

Tonight I had a dilemma: which option should you choose when taking a snapshot of a powered-off virtual machine?

I think the best option is a disk-only snapshot, because there will be nothing in the virtual machine’s memory when it is powered off.

What are your thoughts on this? Let me know.

Asigra – DS-Client is in standby mode

Today I had a very interesting day, with the DS-Client telling me that it had gone into standby mode.

Asigra DS-Client error

I really did not know what went wrong at first, but I did the following to try and resolve the issue.

1: Logged into SQL Server Management Studio and ran this query against the dsclient DB:

1.1: select db_number from setup;

This showed me that the db_number was “250”, which was the same as on the DS-Operator, so I didn’t need to apply any DB patches.

2: I also ran “dsclni.exe” from the DS-Client folder, “C:\Program Files\CloudBackup\DS-Client\dsclni.exe”.

3: That still did not work, so I restarted the “dsclient” service on the DS-Client server.

That did the trick, and the DS-Client re-established communication with the DS-Operator. But you would think that was the end of it. Oh no.

I tried to do a test backup, and it kept telling me it “failed to connect to DS-System”. At this point I was scratching my head; I had nearly run out of ideas. I thought to myself that if I un-registered the client from the DS-Operator and then re-registered it from the DS-Client, that might work. And it did.
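
The db_number check in step 1 is just a one-row query. Here is a rough simulation using Python’s built-in sqlite3; the schema and the value 250 mirror what I saw, but note the real DS-Client database is SQL Server, so this is only an illustration of the query shape:

```python
import sqlite3

# Toy stand-in for the DS-Client database. The real one is SQL Server,
# but the query from step 1.1 has the same shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setup (db_number INTEGER)")
conn.execute("INSERT INTO setup VALUES (250)")

(db_number,) = conn.execute("SELECT db_number FROM setup").fetchone()
print(db_number)  # if this matches the DS-Operator's number, no DB patch is needed
```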

VMware – ESXi 5.0 Configuration Maximums

This page is really here to help you quickly find the configuration maximums that ESXi 5.0 has to offer. The information has been taken from the O’Reilly VMware Cookbook for my own reference, and for anyone else who wants it.

Virtual Machine Maximum

Number of virtual CPUs per virtual machine = 32
RAM per virtual machine = 1TB
Swap file size = 1TB
Virtual SCSI adapters per virtual machine = 4
Virtual SCSI targets per virtual SCSI adapter = 15
Virtual SCSI targets per virtual machine = 60
Virtual disks per virtual machine (PVSCSI) = 60
Virtual disk size = 2TB – 512 bytes
Number of IDE controllers per virtual machine = 1
Number of IDE devices per virtual machine = 4
Number of floppy devices per virtual machine = 2
Number of floppy controllers per virtual machine = 1
Number of virtual NICs per virtual machine = 10
Number of serial ports per virtual machine = 4
Number of remote consoles to a virtual machine = 40
Number of USB controllers per virtual machine = 1
Number of USB devices connected to a virtual machine = 20
Number of parallel ports per virtual machine = 3
Number of USB 3.0 devices connected to a virtual machine = 1
Number of xHCI USB controllers = 20
Maximum amount of video memory per virtual machine = 128MB
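
As a quick sanity check, a few of the per-VM limits above can be encoded and tested against a proposed VM spec. A sketch (the limits and keys here are my own encoding of the list above, not a VMware API):

```python
# A handful of per-VM maximums for ESXi 5.0, as listed above.
VM_MAX = {
    "vcpus": 32,
    "ram_gb": 1024,      # 1TB
    "scsi_adapters": 4,
    "scsi_targets": 60,
    "nics": 10,
    "serial_ports": 4,
}

def violations(spec):
    """Return the keys where the proposed spec exceeds the ESXi 5.0 maximum."""
    return [k for k, limit in VM_MAX.items() if spec.get(k, 0) > limit]

# Example: a monster VM that asks for too many vCPUs and NICs.
print(violations({"vcpus": 64, "ram_gb": 512, "nics": 12}))  # ['vcpus', 'nics']
```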

ESX Host Maximum

Logical CPUs per physical ESXi host = 160
Virtual Machines per physical ESXi host = 512
Virtual CPUs per physical ESXi host = 2,048
Virtual CPUs per physical ESXi core = 25
Fault tolerance virtual disks per physical ESXi host = 16
Fault tolerance virtual CPUs per physical ESXi host = 1
Maximum RAM per fault tolerant virtual machines = 64GB
Maximum Fault Tolerant virtual machines per physical ESXi host = 4

Memory maximums

RAM per physical ESXi host = 2TB
Number of swap files per physical ESXi host = 1 per virtual machine
Maximum swap file size = 1TB

iSCSI physical storage maximums

LUNs per physical ESXi server = 256
Qlogic 1Gb iSCSI HBA initiator ports per ESXi server = 4
Broadcom 1Gb iSCSI HBA initiator ports per ESXi server = 4
Broadcom 10Gb iSCSI HBA initiator ports per ESXi server = 4
NICs that can be associated with or bound to the software iSCSI stack = 8
Number of total paths on a physical ESXi server = 1,024
Number of paths to a LUN (software and hardware iSCSI) = 8
Qlogic iSCSI: dynamic targets per adapter port = 64
Qlogic iSCSI: static targets per adapter port = 62
Broadcom 1Gb iSCSI HBA targets per adapter port = 64
Broadcom 10Gb iSCSI HBA targets per adapter port = 128
Software iSCSI targets = 25

NAS storage maximums

NFS mounts per physical ESXi host = 256

Fibre Channel storage maximums

LUNs per physical ESXi host = 256
LUN ID per physical ESXi host = 255
Number of paths to a LUN = 32
Number of total paths on an ESXi host = 1,024
Number of HBAs of any type = 8
HBA ports per physical ESXi server = 16
Targets per HBA adapter = 256

VMFS 5 storage maximums

Volume size = 64TB
Raw device mapping size (virtual) = 2TB – 512 bytes
Raw device mapping size (physical) = 64TB
Block size = 1MB
File size = 2TB – 512 bytes
Files per volume = ~130,960 files
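
The “2TB – 512 bytes” figure that appears in these tables is an exact number, spelled out in Python:

```python
# VMFS-5 maximum file / virtual-mode RDM size: 2TB minus one 512-byte sector.
TB = 1024 ** 4

max_vmdk_bytes = 2 * TB - 512
print(max_vmdk_bytes)  # 2199023255040
```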

Storage DRS maximums

Virtual disks per datastore cluster = 9,000
Datastores per datastore cluster = 32
Datastore clusters per vCenter = 256

Storage concurrent operations

Concurrent vMotion operations per datastore = 128
Concurrent storage vMotion operations per datastore = 8
Concurrent storage vMotion operations per ESXi host = 2
Concurrent non-vMotion provisioning operations per host = 8

VMDirect path limits

VMDirectPath PCI/PCIe devices per host = 8
VMDirectPath PCI/PCIe devices per virtual machine = 4

vSphere standard and distributed switch maximums

Total virtual network switch ports per host (VDS and VSS ports) = 4,096
Maximum active ports per host (VDS and VSS) = 1,016
Virtual network switch creation ports per standard switch = 4,088
Port groups per standard switch = 256
Distributed virtual network switch ports per vCenter Instance = 30,000
Static port groups per vCenter Instance = 5,000
Ephemeral port groups per vCenter = 256
Hosts per VDS switch = 350
Distributed switches per vCenter instance = 32

vCloud director maximums

Virtual machine count = 20,000
Powered-on virtual machine count = 10,000
Organizations = 10,000
Virtual machines per vApp = 64
vApps per organization = 500
Number of networks = 7,500
Hosts = 2,000
vCenter servers = 25
Virtual data centers = 10,000
Datastores = 1,024
Catalogs = 1,000
Media = 1,000
Users = 10,000

Cheap Home Lab Setup

If you want to build your own home lab environment to practise VMware ESXi and vCenter then below you will find a list of kit that I would recommend you buy if you have a budget to stick to.

I really did not want to spend too much on kit, so I searched through loads of motherboards, CPUs and memory and came up with the list below. Install ESXi on a USB stick, buy yourself a cheap unmanaged 1Gig switch, and you are done.

Specification

1: Komputerbay 16GB (2×8) DDR3-12800 1600MHz DIMM from Amazon.co.uk @ £39.99

2: Gigabyte GA-Z77N-WIFI, Intel Z77, S1155 from Scan.co.uk  @ £88.52

3: Intel Core i3 3220 Ivy Bridge Dual Core Processor from Scan.co.uk @ £88.96

4: CIT MTX001B, Black mini-ITX Case with 300W PSU from Scan.co.uk @ £27.47

VMware – Snapshots explained

In VMware, snapshot delta files grow in 16MB increments.

There is no formula you can use to predetermine the size of a snapshot beforehand.

VMware’s best practice is to keep a snapshot for no more than 24-72 hours, and no more than 1-2 snapshots per VM. A snapshot file can grow up to the maximum size of the original disk, so to avoid problems and mitigate future issues, delete or consolidate the snapshot as soon as you are happy that your VM is running as it should.
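
While there is no exact formula, the 16MB grain at least lets you round an estimate of changed data up to what the delta file would occupy. A rough sketch (my own back-of-the-envelope helper, not a VMware tool):

```python
import math

GRAIN_MB = 16  # snapshot delta files grow in 16MB increments

def delta_size_mb(changed_mb, original_disk_mb):
    """Round changed data up to the 16MB grain, capped at the original disk size."""
    grains = math.ceil(changed_mb / GRAIN_MB)
    return min(grains * GRAIN_MB, original_disk_mb)

print(delta_size_mb(1, 40 * 1024))    # 16  -> even 1MB of change costs one grain
print(delta_size_mb(100, 40 * 1024))  # 112 -> 100MB rounded up to the grain
```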
Now I’ve taken some screenshots of a virtual machine with the memory and quiesce options to show the size of the snapshots.

1: What the snapshot options look like.

VMware - snapshot window

2: This shows the state of the files before any snapshots. Note that the vswp file is 4GB in size; that is the memory that was allocated to the server during creation.

VMware - snapshot 1

3: The 1st snapshot I took was with the memory option, which by the way is the default. As you can see, the delta.vmdk file is 16MB and the vmsn file is 4GB, which is the size of the memory in the virtual server.

VMware - snapshot 2

4: The 2nd snapshot was taken with only the “quiesce” option ticked, and as you can see the delta.vmdk file is still 16MB but the vmsn file is only 31K. The reason is that there was no real disk I/O happening on the server at the time of the snapshot.

VMware - snapshot 3

5: The 3rd snapshot was taken with both the “memory & quiesce” options ticked. The delta.vmdk is still 16MB, but the vmsn file is a combination of the memory and disk state.

VMware - snapshot 4

VMware – RDM (Raw Device Mapping)

About Raw Device Mapping

An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains metadata for managing and redirecting disk access to the physical device.

The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

RDMs can be described in terms such as mapping a raw device into a datastore, mapping a system LUN, or mapping a disk file to a physical disk volume.

Physical Compatibility Mode

In physical compatibility mode, SCSI commands are passed through by the VMkernel, as all the hardware characteristics of the device are exposed to the VM.
VMFS-5 supports physical-mode RDM disks of up to 64TB in size.

You cannot convert a 2TB+ RDM disk to a virtual disk, or clone the VM for that matter; it is not supported.
No snapshots can be taken of physical compatibility mode (PCM) disks.
MS Clustering is supported in PCM.

Virtual Compatibility Mode

In virtual compatibility mode, the maximum VMDK size is 2TB – 512 bytes.
Virtual machine snapshots are available for RDMs in virtual compatibility mode.
You can Storage vMotion a VCM disk to other datastores.
MS Clustering is not supported in VCM.

VMware – Recommended disk or LUN sizes for VMware ESX/ESXi installations

VMware ESX 3.0.3 and 3.5

VMware ESX 3.x requires a minimum of approximately 8GB.

System Disk Partitions

100MB boot partition
5GB system root partition
VMFS partition, if defined, spans the remainder of the disk
Extended partition

System Disk Logical Partitions

1GB swap partition
2GB system log partition
110MB VMkernel core dump partition

VMware ESX 4.0 and 4.1

For VMware ESX 4.x, the ESX Console OS is instead situated within a virtual machine disk (.vmdk) file on a VMFS file system. The size of this disk file varies between deployments, based on the size of the logical unit used. A minimum requirement is approximately 8GB.

Notes:
The stored Console OS virtual machine disk files may increase in size over the course of a deployment, accommodating for additional log data and files.
It can be stored on a SAN LUN or different block device than the system disk, as long as it has been partitioned and formatted as VMFS.
The Console OS disk, as a best practice, should not be situated on a shared SAN LUN.

System Disk Partitions

1100MB boot partition
110MB VMkernel core dump partition
Extended partition spans the remainder of the disk
Logical VMFS partition spanning the remainder of the extended partition

VMFS Partition / Console OS VMDK Partitions

These are partitions that reside within the Console OS VMDK, stored on the formatted VMFS volume:
600MB swap partition
2GB system log partition
Extended partition spanning remainder of Console OS .vmdk file
5GB system (root) partition

VMware ESXi 3.5, 4.0, and 4.1 (Installable)

VMware ESXi installations benefit from reduced space and memory requirements due to the omission of the Console OS. It requires approximately 6GB of space without a defined VMFS partition or datastore.

When additional block devices are provided, they may be formatted and utilized as VMFS, or in some cases as additional scratch storage space.

System Disk Partitions

4.2MB FAT boot partition
Extended partition
4.3GB FAT partition for scratch storage and swap
Remainder of device may be formatted as VMFS

Note: The minimum size for a VMFS datastore is approximately 1GB.

System Disk Logical Partitions

250MB FAT partition for a hypervisor bootbank
250MB FAT partition for a second hypervisor bootbank
110MB diagnostic partition for VMkernel core dumps
286MB FAT partition for the store partition (VMware Tools, VMware vSphere/Infrastructure Client, core)
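
The partition sizes listed above can be added up as a quick sanity check on the “approximately 6GB” requirement. A sketch in Python (my own arithmetic; the 1GB VMFS figure is the minimum datastore size noted above):

```python
# ESXi 3.5/4.x installable partition sizes from the lists above, in MB.
primary = {"boot": 4.2, "scratch_and_swap": 4.3 * 1024}
logical = {"bootbank": 250, "bootbank2": 250, "diagnostic": 110, "store": 286}

total_mb = sum(primary.values()) + sum(logical.values())
print(round(total_mb / 1024, 2))  # about 5.18GB of fixed partitions
# Adding the ~1GB minimum VMFS datastore brings this close to the ~6GB quoted.
```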

VMware ESXi 3.5, 4.0, and 4.1 (Embedded / USB)

Embedded VMware ESXi installations typically utilize approximately 1GB of non-volatile flash media via USB. An additional 4-5GB of space may be defined on local storage for scratch storage and swap.

Persistent logging is not included with embedded ESXi. VMware recommends configuring remote syslog services for troubleshooting or when anticipating technical issues. For additional information, see Enabling syslog on ESXi (1016621).

USB Device Primary Partitions

4.2MB FAT boot partition
Extended partition
USB Device Logical Partitions

250MB FAT partition for a hypervisor bootbank
250MB FAT partition for a second hypervisor bootbank
110MB diagnostic partition for VMkernel core dumps
286MB FAT partition for the store partition (VMware Tools, VMware vSphere/Infrastructure Client, core)

Local Disk Partitions (If Present)

4.3GB FAT partition for scratch storage and swap
110MB diagnostic partition for VMkernel core dumps
Remainder of device may be formatted as VMFS

Note: The minimum size for a VMFS datastore is approximately 1GB.

For additional information on these installation requirements, see the respective installation guide for your chosen product in the VMware Documentation pages.

VMware ESXi 5.0 (Installable)

For fresh installations, several new partitions are created for the boot banks, the scratch partition, and the locker.

Fresh ESXi installations use GUID Partition Tables (GPT) instead of MSDOS-based partitioning. The partition table itself is fixed as part of the binary image, and is written to the disk at the time the system is installed. The ESXi installer leaves the scratch and VMFS partitions blank and ESXi creates them when the host is rebooted for the first time after installation or upgrade.

One 4GB VFAT scratch partition is created for system swap. See “About the Scratch Partition,” in the vSphere Installation and Setup Guide.
The VFAT scratch partition is created only on the disk from which the ESXi host is booting. On the other disks, the software creates a VMFS5 partition on each disk, using the whole disk.

During ESXi installation, the installer creates a 110MB diagnostic partition for core dumps.

VMware ESXi 5.0 (Embedded/USB)

One 110MB diagnostic partition for core dumps, if this partition is not present on another disk. The VFAT scratch and diagnostic partitions are created only on the disk from which the ESXi host is booting. On other disks, the software creates one VMFS5 partition per blank disk, using the whole disk. Only blank disks are formatted.

In ESXi Embedded, all visible blank internal disks are also formatted with VMFS by default.

vSphere 5.0 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media.

ESXi can boot from a disk larger than 2TB, provided that the system firmware and the firmware on any add-in card you are using support it. See the vendor documentation.

USB key size for ESXi 5.0 embedded is vendor dependent.
