Thursday, September 3, 2015

Which ESXi host will be my HA master?


    Want to predict which host will become your high availability master?

    According to Duncan Epping's deep dive here

     The host that is participating in the election with the greatest number of connected datastores will be elected master. If two or more hosts have the same number of datastores connected, the one with the highest Managed Object Id will be chosen. This however is done lexically; meaning that 99 beats 100 as 9 is larger than 1

    So the host with the most datastores should win.
    And if they have an equal number of datastores,
    the host with the lexically highest MOID should win.
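    The election logic above is easy to model: rank hosts by connected datastore count, breaking ties by comparing MOIDs as strings (lexically) rather than as numbers. Here is a toy sketch in shell; the host names, datastore counts, and MOIDs are made up for illustration, and this is not the actual FDM election code.

```shell
# Toy HA master election. Each line is "host datastore_count moid"
# (all values made up). Sort by datastore count (numeric, descending),
# then by MOID as a plain string (lexical, descending), take the winner.
printf '%s\n' \
  "host1 1 98" \
  "host2 1 99" \
  "host3 4 102" \
  "host4 1 100" |
  sort -k2,2nr -k3,3r | head -n1   # prints "host3 4 102"
```

    With host3 removed, the same pipeline picks host2: lexically, MOID "99" beats both "98" and "100", which is exactly the 99-beats-100 behavior described in the quote.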


    Test A
    Here I have 4 hosts.
    Host 3 has 4 datastores; every other host has 1.
    Host 3 should become the HA master.

    Currently there is no HA master because HA is turned off.

    Turning on HA
    The HA agent installed on host 1 first, so it came up first and became the master.



    This is after it finished




    So the first host to have HA installed is the one that initially becomes the master.

    Test B
    What if we bump that one off?
    Will #3 then become the master?

    Yes. As predicted, #3 is the master.


    Test C

    Now the other 3 hosts have the same number of datastores.
    So if we bump off host 3, which one will be the master?
    Let's look at the MOIDs.
    Host 1 should be the master since its MOID is lexically the highest of the remaining hosts (remember, lexically 94 > 103).
    Host 3 (MOID 98) is already the master, so it doesn't count in our prediction of the next master.

    Bump off #3

    Number 1 is the master



    Yes. As predicted, #1 is the master.
    -------------------------------------------------------------------------------
    As mentioned in an earlier post vSphere High Availability has been completely overhauled… This means some of the historical constraints have been lifted and that means you can / should / might need to change your design or implementation.
    What I want to discuss today is the changes around the Primary / Secondary node concept that was part of HA prior to vSphere 5.0. This concept basically limited you in certain ways… For those new to VMware /vSphere, in the past there was a limit of 5 primary nodes. As a primary node was a requirement to restart virtual machines you always wanted to have at least 1 primary node available. As you can imagine this added some constraints around your cluster design when it came to Blades environments or Geo-Dispersed clusters.
    vSphere 5.0 has completely lifted these constraints. Do you have a blade environment and want to run 32 hosts in a cluster? You can right now, as the whole Primary/Secondary node concept has been deprecated. HA uses a new mechanism called the Master/Slave node concept. This concept is fairly straightforward: one of the nodes in your cluster becomes the master and the rest become slaves. I guess some of you will have the question "but what if this master node fails?". Well, it is very simple: when the master node fails, an election process is initiated and one of the slave nodes will be promoted to master and pick up where the master left off. On top of that, let's take the example of a geo-dispersed cluster: when the cluster is split into two sites due to a link failure, each "partition" will get its own master. This allows for workloads to be restarted even in a geographically dispersed cluster when the network has failed.
    What is this master responsible for? Basically, all the tasks that the primary nodes used to have, like:
    • restarting failed virtual machines
    • exchanging state with vCenter
    • monitoring the state of slaves
    As mentioned, when a master fails an election process is initiated. The HA master election takes roughly 15 seconds. The election process is simple but robust. The host that is participating in the election with the greatest number of connected datastores will be elected master. If two or more hosts have the same number of datastores connected, the one with the highest Managed Object Id will be chosen. This however is done lexically; meaning that 99 beats 100 as 9 is larger than 1. That is a huge improvement compared to what it was like in 4.1 and prior, isn't it?
    ==========================================================================
5. Can you briefly describe the differences you found between ESXi 4.1 and 5.0?

vSphere 4.1 HA:
• It is called Automated Availability Manager (AAM) in this version.
• When HA is configured on a vSphere 4.1 cluster, the first 5 hosts are designated as primary nodes; one of these acts as the "Master Primary" and handles restarts of VMs in the event of a host failure. All remaining hosts join as secondary nodes.
• Primary nodes maintain information about cluster settings and secondary node states. All nodes exchange heartbeats every second to track each other's health: primary nodes send their heartbeats to all other primary and secondary nodes, while secondary nodes send their heartbeats to primaries only.
• In the case of a primary failure, another primary node takes over responsibility for restarts. If all primaries go down at the same time, no restarts are initiated; in other words, at least one primary is required to initiate restarts. Election of a primary happens only in the following scenarios:
  o When a host is disconnected
  o When a host enters maintenance mode
  o When a host is not responding
  o When the cluster is reconfigured for HA

vSphere 5.0 HA:
• It is called Fault Domain Manager (FDM) in this version.
• When HA is configured on a vSphere 5.0 cluster, one node is elected master and all other nodes become slaves. The master is elected based on the number of datastores it is connected to; if all hosts in the cluster are connected to the same number of datastores, the host with the lexically highest managed object ID (MOID) is elected master.
• All hosts exchange heartbeats with each other to track their health states. Host isolation response is enhanced in this version by datastore heartbeating: every host creates a hostname-hb file on the configured datastores and keeps it updated at a specific interval; two datastores are selected for this purpose.
• To see which host is the master and which are slaves, go to vCenter and click Cluster Status for the cluster.
    =====================================================================
Open a web browser and connect to this URL:
https://VCHostnameOrIPAddress/vod/index.html
Authentication is required; provide the credentials of a vCenter administrator user, then click on the host status link.

Note: the MOID changes if a host is removed from vCenter and then added again.

VMware: ESXi Unattended Scripted Installation

ESXi installation is an easy job for one or two hosts, but imagine repeating that installation for 40 or 50 hosts: it would take all day. To avoid such a time-consuming situation, VMware allows administrators to perform unattended ESXi installations.

The unattended installation is performed using a kickstart script provided during boot. The kickstart script contains all the parameters ESXi needs to complete the installation process automatically, without further human intervention.

First, I suggest you have a look at the official documentation on ESXi 5.5 scripted installation:

Deploying ESXi 5.x using the Scripted Install feature (2004582) 
About Installation and Upgrade Scripts

Here's my kickstart file. I named it ks.cfg. You can use it as a base template and edit it according to your requirements.

 #
 # Sample scripted installation file
 #
 # Accept EULA
 vmaccepteula
 # Set root password
 rootpw mypassword
 # Install on the local disk, overwriting any existing VMFS datastore
 install --firstdisk --overwritevmfs
 # Network configuration
 network --bootproto=static --device=vmnic0 --ip=192.168.116.228 --netmask=255.255.255.0 --gateway=192.168.116.2 --nameserver=192.168.116.2 --hostname=esx1.testdomain.local --vlanid=100 --addvmportgroup=1
 # Reboot after the installation completes
 reboot

As you can see, the code is already commented, but let me say a few words about:

install --firstdisk --overwritevmfs

This is used to install ESXi on the first available local disk, overwriting any existing VMFS partition.

While:

network --bootproto=static --device=vmnic0 --ip=192.168.116.228 --netmask=255.255.255.0 --gateway=192.168.116.2 --nameserver=192.168.116.2 --hostname=esx1.testdomain.local --vlanid=100 --addvmportgroup=1

This specifies that vmnic0 will be used for management and assigns its IP address, netmask, gateway, DNS server, and VLAN ID.

--addvmportgroup=1 creates the VM Network portgroup to which virtual machines will be connected by default.
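Beyond the directives shown, a kickstart file can optionally include a %firstboot section containing commands that run once after the first boot. The snippet below is only an illustration of the syntax; enabling SSH is my hypothetical example, not part of the base template above.

```
# Optional: commands executed once, after the first boot
%firstboot --interpreter=busybox
# Enable and start SSH (illustrative post-install tweak)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```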

Let me now explain how to use this kickstart file during installation.

Boot your host with the ESXi installation media attached (I use CD-ROM). During boot, press SHIFT + O to edit the boot options. The Weasel prompt will appear.

> runweasel

The basic command to use a network-accessible (HTTP, HTTPS, NFS, FTP) kickstart file is:

> runweasel ks=<kickstart_file_location> ip=<ip_address_to_use_to_retrieve_ks> netmask=<netmask_to_use_to_retrieve_ks> gateway=<gateway_to_use_to_retrieve_ks> vlanid=<vlan_to_use_to_retrieve_ks>

The kickstart script's location doesn't have to be an HTTP(S) server; FTP, NFS, CD-ROM, and USB locations are also accepted, in the form:

ks=protocol://<serverpath>
ks=cdrom:/<path>
ks=file://<path>
ks=usb:</path> 


In this example I retrieve the kickstart file from a web server (an HTTP location) and assign 192.168.116.222 as the host's IP address during the installation process.

> runweasel ks=http://192.168.116.1:8080/ks.cfg ip=192.168.116.222 netmask=255.255.255.0 gateway=192.168.116.2




The unattended installation will begin by parsing the kickstart file.



When the installation is complete, the host will reboot and ESXi will be ready to use.



That's all!!
==================================================================

Unattended ESXi Installation from an USB Flash Drive

  1. Create a bootable ESXi Installer USB Flash Drive with Rufus (Howto)
  2. Navigate to the Flash Drive and open boot.cfg with an editor. Make sure to use an editor that can handle UNIX encoding (PSPad for example)
  3. Replace kernelopt=runweasel with kernelopt=ks=usb:/ks.cfg
  4. Create ks.cfg in the root directory of your Flash Drive
  5. Open ks.cfg with an editor. Creating a complex kickstart file is out of scope for this post. The only option you need to know about is install. The --firstdisk flag will install ESXi to the first device, with the following priority:
    1 – locally attached storage (local)
    2 – network storage (remote)
    3 – usb disks (usb)
    Be careful not to destroy any data! Remove shared LUNs and do not use this with servers that contain data. Copy this to your ks.cfg file. This will install ESXi to the first local disk:
    vmaccepteula
    rootpw vmware
    install --firstdisk --overwritevmfs
    network --bootproto=dhcp --device=vmnic0
    reboot
  6. Save and Close
That’s it. You can plug in the USB Flash Drive to a Server, power it on and it will be installed automatically. The password for the root user is set to “vmware”.
Do you want to install plenty of hosts to SD cards or USB flash drives? Create the required number of USB flash drives or SD cards, but replace the ks.cfg file with:
vmaccepteula
rootpw vmware
install --firstdisk=usb-storage --overwritevmfs
network --bootproto=dhcp --device=vmnic0
reboot
Plug it in and power the server on. It will boot from the media and do an unattended installation to the installation media itself. (It uses the first USB device, which might be an issue when you have more than one USB storage device connected.)
Of course, you can also write your own ks.cfg file to deploy servers with your own customized configuration.

ESXi5.0 Host Partition Layout

During the installation of ESXi 5.0, the system creates at least five partitions whose size and layout the user cannot control.
Bootpartition (4 MB):
This partition is needed for booting
Bootbank (250 MB):
The compressed boot image is saved on this FAT partition.
It will be extracted during the boot process and loaded into the system memory.
At the time of vSphere 4 this partition was about 70 MB; with vSphere 5 it has grown to 250 MB.
AltBootbank (250 MB):
This partition is empty after a fresh install. Once you perform an update of ESXi, the current image is copied from the bootbank partition here.
This makes it possible to return to the last known good configuration by typing "Shift + R" while booting, if an error occurs during the update of an ESXi host.
Dump/crash partition (110 MB):
In the case of a total crash of the host a dump file is written on this partition.
Store (285 MB):
The VMware Tools ISO files for all supported guest operating systems are stored on this partition.
Scratch partition (4 GB):
This partition is only created if the installation medium has at least 5 GB of space. It is used for the VMkernel log files.
If this partition is missing, the logs of the host are lost after a reboot or shutdown.
VMFS partition:
This partition is only created if the installation medium is not a flash memory.
It extends over the total available space of the medium and is formatted with VMFS 5.
You can use the command “ls /dev/disks/ -l” to display all the created partitions:

=====================================================================
In ESXi the partition schema is automatically defined by the installation process and there is no way to modify it (you can only choose where to install the hypervisor). There is a great post from Rickard Nobel (ESXi 5 partitions) that explains the structure of the partitions, their size, and their purpose. But it does not explain how to get this information.
To see the partition layout in ESXi 5, the fdisk command will not work on new GPT disks (this type is used for all new disks and for disks that are extended to more than 2 TB, as written in the post about the upgrade of VMFS).
The new partedUtil command must be used instead. The steps to show the partition table with this new command are:
  • First, identify under /dev/disks the name of the system disk (this is easy because it is usually the only disk with multiple partitions):
  • Then use the get or getptbl options to see the partition sizes (note that the partition sizes were already visible in the first screenshot):
In this example you can see there are 7 partitions:
  • Partition 1: systemPartition 4MB
  • Partition 5: linuxNative 250MB -> /bootbank
  • Partition 6: linuxNative 250MB -> /altbootbank
  • Partition 7: vmkDiagnostic 110MB
  • Partition 8: linuxNative 286MB -> /store
  • Partition 2: linuxNative 4GB -> /scratch (not present if you install ESXi on a small disk or flash)
  • Partition 3: VMFS datastore (not present if you install ESXi on a flash media)
As you can see, a minimal installation can also fit in a small flash memory of only 1 GB. A curious thing is that some partitions are reported as linuxNative even though they contain a FAT filesystem (they are also reported by the df utility).
As written in Rickard's post, the footprint of ESXi has increased to around 124 MB in version 5.0, compared to 70 MB in 4.1 and the original 32 MB of 3.5.
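The "fits in 1 GB" claim above is easy to sanity-check by summing the fixed system partitions listed earlier (boot, the two bootbanks, the diagnostic partition, and the store); only the scratch and VMFS partitions are optional extras:

```shell
# Fixed ESXi 5.0 system partitions, sizes in MB:
# boot (4) + bootbank (250) + altbootbank (250) + vmkDiagnostic (110) + store (286)
echo $((4 + 250 + 250 + 110 + 286))   # prints 900, comfortably under 1 GB
```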

=================================================================

ESXi 5 Partition Layout

I’ve recently gone through an exercise to configure the scratch partition on a number of boot from san hosts, which got me thinking about partition layout in general on ESXi hosts.
When you install ESXi you do not get the opportunity to modify the partition layout; it's automatically set by the installation process.
If you’re curious you can view the partition layout on ESXi 5 hosts using partedUtil (for earlier versions of ESXi, fdisk is used instead). First we need to identify the disk that we will examine further with partedUtil. We can do this by listing the contents of /dev/disks:
The disk we are looking for is the one with multiple partitions, easily seen in blue in the above screen shot.
Now we have the disk, we can run partedUtil to examine the partition table on the disk:
Disregarding the 11GB VMFS partition and the 4GB scratch partition in the screenshot above, you can see that an installation of ESXi can take up as little as 1 GB.
So what are these partitions?
The first and smallest partition is purely used for booting the system and locating the hypervisor image which is located on one of the next two partitions.
The actual system/hypervisor image is located on the first 250 MB partition, formatted with plain old FAT. The image, s.v00, is a 124 MB compressed file which is decompressed on boot and contains the hypervisor operating system.
The next partition is also used for the system image. This is a copy of the last working image: empty on first install, but when the server is upgraded, the previous system image is copied to this partition. It can be accessed during the boot process by pressing Shift + R.
The next partition is the 110 MB core dump partition, which stores the dump file if your host has the dreaded PSOD.
The 286 MB partition is where the ISO files for the VMware Tools are found.
In minimal installs the above partitions may be all that you see, however as seen in our screenshot above, there are a couple of others that you may see.
If you install ESXi on local storage with more than 4 or 5 GB free space then a Scratch partition will be automatically created. Various log files amongst other things will be redirected to this partition. If you didn’t have the necessary free space for this partition, then those files would be stored on a RAM drive, and would be lost in the event of a power failure. More about this shortly…
The final partition on the host in this example is the VMFS partition which was created during install, using available unallocated space. On this host this partition is 11GB.
So, with the partitions covered, back to my original issue: the scratch partition. With an installation on small disks, USB, or boot from SAN, you will find that a dedicated scratch partition won't be created and scratch will be served out of a ramdisk instead.
ESXi selects one of these scratch locations during startup in order of preference:
  1. The location configured in the /etc/vmware/locker.conf configuration file, set by the ScratchConfig.ConfiguredScratchLocation configuration option
  2. A Fat16 filesystem of at least 4 GB on the Local Boot device.
  3. A Fat16 filesystem of at least 4 GB on a Local device.
  4. A VMFS Datastore on a Local device, in a .locker/ directory.
  5. A ramdisk at /tmp/scratch/
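The preference list above is a simple first-match search. A minimal shell sketch (illustrative only, with made-up paths; not ESXi's actual startup code) makes the fallback behavior explicit:

```shell
# Return the first candidate directory that exists, falling back to the
# ramdisk path, mirroring ESXi's scratch-location preference order.
pick_scratch() {
  for candidate in "$@"; do
    if [ -d "$candidate" ]; then
      echo "$candidate"
      return
    fi
  done
  echo "/tmp/scratch"   # ramdisk fallback
}

# With no candidates present, the ramdisk location is selected:
pick_scratch /vmfs/volumes/nonexistent/.locker   # prints /tmp/scratch
```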
It should be said that the scratch partition is necessary for several functions on the ESXi host – you can get an idea of what you will find on there by seeing the symbolic links that are created to redirect files and directories to scratch. For example, /var/log -> /scratch/var/log/
For hosts where a scratch partition hasn’t been automatically created, the following steps can be followed:
  1. Select the ESXi host in the inventory.
  2. Click the Configuration tab.
  3. Click Storage.
  4. Right-click a datastore and select Browse.
  5. Create a uniquely-named directory for this ESXi host (eg, .locker-ESXHostname)
  6. Close the Datastore Browser.
  7. Click Advanced Settings under Software.
  8. Select the ScratchConfig section.
  9. Change the ScratchConfig.ConfiguredScratchLocation configuration option, specifying the full path to the directory (this needs to be unique for every ESXi host). For example: /vmfs/volumes/DatastoreName/.locker-ESXHostname
  10. Enable the ScratchConfig.ConfiguredSwapState option (recommended, as this is a requirement for HA: see KB 1004177).
  11. Put the ESXi host in maintenance mode and reboot for the configuration change to take effect.

Wednesday, September 2, 2015

Master and Slave Hosts

When you add a host to a vSphere HA cluster, an agent is uploaded to the host and configured to communicate with other agents in the cluster. Each host in the cluster functions as a master host or a slave host.
When vSphere HA is enabled for a cluster, all active hosts (those not in standby or maintenance mode, or not disconnected) participate in an election to choose the cluster's master host. The host that mounts the greatest number of datastores has an advantage in the election. Only one master host exists per cluster and all other hosts are slave hosts. If the master host fails, is shut down, or is removed from the cluster a new election is held.
The master host in a cluster has a number of responsibilities:
Monitoring the state of slave hosts. If a slave host fails or becomes unreachable, the master host identifies which virtual machines need to be restarted.
Monitoring the power state of all protected virtual machines. If one virtual machine fails, the master host ensures that it is restarted. Using a local placement engine, the master host also determines where the restart should be done.
Managing the lists of cluster hosts and protected virtual machines.
Acting as vCenter Server management interface to the cluster and reporting the cluster health state.
The slave hosts primarily contribute to the cluster by running virtual machines locally, monitoring their runtime states, and reporting state updates to the master host. A master host can also run and monitor virtual machines. Both slave hosts and master hosts implement the VM and Application Monitoring features.
One of the functions performed by the master host is virtual machine protection. When a virtual machine is protected, vSphere HA guarantees that it attempts to power it back on after a failure. A master host commits to protecting a virtual machine when it observes that the power state of the virtual machine changes from powered off to powered on in response to a user action. If a failover occurs, the master host must restart the virtual machines that are protected and for which it is responsible. This responsibility is assigned to the master host that has exclusively locked a system-defined file on the datastore that contains a virtual machine's configuration file.
vCenter Server reports whether a host is a master host or a slave host using a vSphere HA host state. This state is reported on the host's Summary tab in the vSphere Client and in the Host List view for a cluster or datacenter, if the HA State column has been enabled. An HA state of "Running (Master)" indicates the host is serving as a vSphere HA master host. A state of "Connected (Slave)" indicates the host is serving as a vSphere HA slave host. Several other states are provided to indicate when an election is underway or an error condition has occurred. The host's Summary tab provides a link next to the vSphere HA state of the host that explains the current state.

Tuesday, September 1, 2015

vSphere and vCenter licensing and pricing

VMware vSphere 4 and vCenter Server are priced and licensed in various ways, depending on the size of the environment and the required functionality.


VMware offers dozens of products, but at its core are vSphere, its virtualization platform, and vCenter Server, its management family. Understanding VMware's vSphere and vCenter licensing is critical to determining whether VMware virtualization is right for an organization.

In total, IT organizations can choose between six VMware vSphere editions, or bundles, and one free product. The paid editions can be configured with a customer's choice of the full ESX hypervisor, or the "light" ESXi. The free edition is available only with ESXi.



Free VMware ESXi

With a zero-dollar price tag, VMware's free ESXi offering is a compelling option for organizations just trying out virtualization. VMware's free ESXi can be easily downloaded directly from the VMware Web site, and users can readily upgrade to a vSphere edition as their needs dictate.
The hypervisor included in the VMware free ESXi download is the same as that in paid versions of vSphere. The free ESXi is available for hosts with an unlimited number of processors of up to six cores each, and for hosts with a maximum of 256 GB of RAM. There is no limit to the number of virtual machines that can run on a free ESXi host.
Free VMware ESXi does, however, come with several restrictions. It is designated as single-server, so administrators cannot use the VMware vSphere Client to manage more than one ESXi host at a time. The reason for this restriction is that ESXi does not include the vCenter Agent, and its application programming interfaces (APIs) are read-only and cannot be executed against. The restriction also precludes third-party scripts from changing ESXi hypervisor settings.
Upgrading a free ESXi host to a vSphere license enables the vCenter Agent and unlocks the ESXi APIs, enabling management via management interfaces such as vCLI, vMA, PERL Toolkit, PowerShell Toolkit and others.
Support for free ESXi is available through self-service Web offerings per incident, or annually. Per-incident email and phone support is $299 per single incident, $749 for three incidents per year, and $1,149 for five incidents. Annual support is available starting at $249 per processor for Gold (business hours), or $298 for Platinum (24/7), with a minimum purchase of two processors.

VSphere Essentials for SMBs

For SMBs, VMware offers the Essentials and Essentials Plus bundles, for $495 and $3,495, respectively. Both bundles provide ESX or ESXi for up to three two-processor servers, where each processor may not have more than six cores. Groups of more than three hosts licensed with Essentials or Essentials Plus cannot be managed in the same vCenter cluster. Features of the Essentials bundle include a choice of ESX/ESXi, VMware vStorage Virtual Machine File System (VMFS), support for four vCPUs, the vCenter Server Agent, vStorage APIs/VMware Consolidated Backup (VCB), vCenter Update Manager, and vCenter for Essentials. The Essentials bundle includes a one-year subscription; support is offered on a per-incident basis.
The Essentials Plus bundle builds on those features and adds vMotion, VMware High Availability (HA) and VMware Data Recovery. Unlike Essentials, Essentials Plus requires the purchase of at least one year of support and subscription services (SnS), purchased separately

VSphere for the Enterprise

All VMware's enterprise vSphere editions are licensed per processor, where a processor can have either up to six or up to 12 cores, depending on the edition. VMware places no restrictions on the number and kind of virtual machines (VMs) that can be hosted on a server, but it does require the purchase of at least one year of SnS per license.
VSphere Standard Edition includes a choice of ESX or ESXi, VMFS, four-way virtual SMP (vCPUs), the vCenter Server Agent, the vStorage APIs or VCB, vCenter Update Manager, vMotion, VMware HA, and vStorage Thin Provisioning. It is priced at $995 per processor and is available for hosts with up to six-core processors and up to 256 GB of RAM.
VSphere Advanced Edition builds on Standard Edition with the addition of VMotion, hot-add, Fault Tolerance, Data Recovery and vShield Zones. It is priced at $2,245 per processor, and can support systems with up to 12-core processors and 256 GB of RAM.
VSphere Enterprise Edition builds on Advanced Edition with the addition of Storage VMotion, Distributed Resource Scheduler (DRS), and Distributed Power Management (DPM). It is priced at $2,875 per processor and supports systems with up to six-core processors and up to 256 GB of RAM.
VSphere Enterprise Plus Edition includes all the features of the lesser editions, plus Host Profiles and the vNetwork Distributed Switch. It can be purchased for $3,495 per processor, with support for up to 12 cores per processor with no limit on RAM. For $3,995, it can also include the Cisco Nexus 1000V virtual switch.

Managing vSphere with vCenter Server

VCenter Server provides a centralized vSphere management console from which administrators can configure, provision, monitor, troubleshoot and update their virtual environment. It is also a prerequisite for many other VMware and third-party management products and is thus a de facto requirement for most VMware environments.
There are three vCenter Server editions: the version included in the vSphere Essentials bundles, vCenter Server Foundation, which provides management for up to three servers for $1,495, and vCenter Server Standard, which is priced at $4,995 but does not impose limits on the number of hosts it can manage.
The following features are included across these three vCenter Server offerings for managing vSphere: a management server, a database server, a search engine, the vSphere Client, the Web Access portal, and vCenter APIs and a .NET extension to provide remote access and integration with other systems.
VCenter Server Standard adds two advanced features for managing vSphere: vCenter Server Linked Mode, for connecting multiple vCenter instances, and vCenter Server Orchestrator, for automating the environment.
As with vSphere, VMware imposes a minimum of one year of SnS on all vCenter licenses.
================================================================
VMware vSphere can be purchased either through an all-in-one kit or a la carte editions. Kits are all-in-one offerings that deliver all the necessary licenses and features/functionality required to get virtualization running in an environment. Editions are a la carte offerings generally designed for larger or specific IT environments with a broader set of requirements.
Each edition includes a hypervisor as well as features to support basic server consolidation, improve availability, protect data, automate resource management and simplify management operations.
Prices listed are for licenses only; Production/Basic Subscription and Support (SnS) and Incident Support (for vSphere Essentials Kit) are sold separately. Paid upgrades are available for all vSphere and vSphere with Operations Management offerings below to all higher kits or editions.

VMware vSphere Editions

A la carte licensing that is priced on a per CPU basis. All editions must be used in conjunction with an existing or separately purchased vCenter Server edition. SnS is required for at least one year.


VMware vSphere Remote Office Branch Office Editions

A la carte licensing is priced in packs of 25 VMs (Virtual Machines). Editions can be used in conjunction with an existing or separately purchased vCenter Server edition. SnS is required for at least one year. The 25 VM pack can be distributed across multiple sites. A maximum of a single 25 VM pack can be used in a single remote location or branch office.

VMware vSphere Essentials Kits

All-in-one solutions that combine virtualization for up to three physical servers (up to two processors each) along with the centralized management capabilities of vCenter Server for Essentials. See description below for specific SnS rules per kit.


VMware vCenter Server Editions

VMware vCenter Server provides unified management for vSphere environments and is a required component of a complete VMware vSphere deployment. One instance of vCenter Server is required to centrally manage virtual machines and their hosts and to enable all vSphere features. SnS is required for at least one year.

Link: VMware-vSphere-Pricing-Whitepaper