Thursday, October 23, 2014

How to Convert a VMware Virtual IDE disk to SCSI disk

Why we need to convert Virtual IDE disk to SCSI disk
When converting a physical machine to a virtual machine using VMware Converter, if an adapter type is not selected during the initial customization, the resulting virtual machine may contain an IDE disk as the primary OS disk. However, using IDE virtual disks has the following limitations:
1. IDE disks are generally slower than SCSI disks.
2. It is not possible to increase the size of an IDE virtual disk, whereas a SCSI-based virtual disk can be resized.
3. Veeam Backup & Replication running in Virtual Appliance mode cannot back up IDE-based virtual disks.
4. With some guest operating systems, if the primary disk is an IDE virtual disk, the newly converted virtual machine may fail to boot because the guest OS does not have a suitable driver.
5. In ESX 4.x the default disk type when creating a Windows XP 32-bit virtual machine is IDE. This default can be changed manually by selecting the Custom option in the virtual machine creation wizard. Windows XP 64-bit, however, uses SCSI by default.
6. Virtual machines may fail to boot, showing only a black screen, after a P2V conversion.
How to convert Virtual IDE disk to SCSI disk
Now let's see how to convert the virtual disk from IDE to SCSI. Thankfully, there are various methods to convert an IDE virtual disk to SCSI, but the two prominent ones are:
1. During P2V conversion of a virtual machine using VMware Converter.
2. Editing the virtual machine's VMDK descriptor file.
Below is the procedure to edit the virtual machine's VMDK descriptor file to convert an IDE virtual disk to SCSI.
1. SSH to the ESXi Host running the virtual machine.
2. Navigate to the virtual machine's home folder. e.g. cd /vmfs/volumes/<datastore_name>/<vm_name>/
3. Open the primary disk's descriptor file (.vmdk) using the vi editor. Note: ESXi does not ship with the nano editor.
4. Find the parameter ddb.adapterType = "ide"
5. Change the adapter type to ddb.adapterType = "lsilogic" or ddb.adapterType = "buslogic", depending on what the guest operating system supports.
6. Save the file in vi by pressing ESC, then typing :wq and pressing Enter.
7. Now from VMware vSphere Client:
  • Click Edit Settings for the virtual machine.
  • Select the IDE virtual disk.
  • Choose to Remove from virtual machine.
  • Click OK.
Caution: Make sure that you choose Remove from virtual machine, not Remove from virtual machine and delete files from disk, or the disk data will be destroyed.
8. From the Edit Settings menu for this virtual machine:
  • Click Add > Hard Disk > Use Existing Virtual Disk.
  • Navigate to the location of the disk and select to add it into the virtual machine.
  • Choose the same controller type that you set in Step 5 as the adapter type. The SCSI ID should read SCSI 0:0.
Note: If a CDROM device exists in the virtual machine it may need to have the IDE channel adjusted from IDE 0:1 to IDE 0:0. If this option is greyed out, remove the CD-ROM from the virtual machine and add it back. This sets it to IDE 0:0.
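On the ESXi host, steps 2 to 6 above amount to locating the descriptor file and changing a single line, so the edit can also be done non-interactively with sed instead of vi. The sketch below simulates that edit on a sample descriptor file; the path and file contents are placeholder assumptions for illustration, not taken from a real virtual machine:

```shell
# Hypothetical path -- on a real host this would be
# /vmfs/volumes/<datastore_name>/<vm_name>/<vm_name>.vmdk
DESC=/tmp/demo.vmdk

# Create a minimal stand-in for a VMDK descriptor file.
cat > "$DESC" <<'EOF'
# Disk DescriptorFile
version=1
ddb.adapterType = "ide"
ddb.geometry.cylinders = "1044"
EOF

# Switch the adapter type from IDE to LSI Logic in place
# (use "buslogic" instead if the guest OS requires it).
sed -i 's/ddb.adapterType = "ide"/ddb.adapterType = "lsilogic"/' "$DESC"

grep 'ddb.adapterType' "$DESC"
```

The same one-line substitution should work in the ESXi BusyBox shell; editing the file with vi as described above achieves the identical change.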

Wednesday, October 22, 2014

Differences between ESXi 5.1 and ESXi 5.5

ESXi 5.5 offers many new and enhanced features over ESXi 5.1, along with increased configuration maximums, which can make it difficult to choose between the two versions on features alone. Hopefully this post gives you enough detail to understand the key differences between vSphere 5.1 and vSphere 5.5. The table below compares the two versions across various features and configuration maximums.

| Features | vSphere 5.1 | vSphere 5.5 |
|---|---|---|
| Physical CPUs per host | 160 | 320 |
| Physical RAM per host | 2 TB | 4 TB |
| NUMA nodes per host | 8 | 16 |
| Maximum vCPUs per host | 2048 | 4096 |
| VMDK size | 2 TB | 62 TB |
| Maximum size of virtual RDM | 2 TB | 62 TB |
| VM hardware version | 9 | 10 |
| 40 Gbps physical adapter support | No | Yes |
| ESXi free version RAM limit | 32 GB | Unlimited |
| ESXi free version maximum vSMP | 8-way virtual SMP | 8-way virtual SMP |
| 16 Gb Fibre Channel end-to-end support | HBAs can run at 16 Gb, but there is no support for full, end-to-end 16 Gb connectivity from host to array | Yes |
| App HA | No | Yes |
| vFlash Read Cache support | No | Yes |
| VMware VSAN support | No | Yes |
| Expanded virtual GPU (vGPU) support | NVIDIA only | NVIDIA, AMD and Intel GPUs |
| vCenter Server Appliance with embedded database supports up to | 5 hosts and 50 virtual machines | 100 hosts and 3,000 virtual machines |
| Microsoft Windows 2012 cluster support | No | Yes |
| PDL (Permanent Device Loss) AutoRemove | No | Yes (introduced in vSphere 5.5) |
| Graphics acceleration support for Linux guest OS | No | Yes |
| Hot-pluggable SSD PCIe devices | No | Yes |
| Support for Reliable Memory Technology | No | Yes |
| CPU C-state enhancement | Host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage | The processor power state (C-state) is also used, providing additional power savings and increased performance |
| LSI SAS support for Oracle Solaris 11 OS | No | Yes |
| vSphere Big Data Extensions | No | Yes |
| SATA-based virtual device nodes via AHCI (Advanced Host Controller Interface) | No | Yes (supports up to 120 devices per VM) |
| Improved LACP support | One LACP group per distributed switch | Supports up to 64 LACP groups per distributed switch |
| Multiple point-in-time replicas | vSphere Replication kept only the most recent copy of a virtual machine | Version 5.5 can keep up to 24 historical snapshots |