Unlike ESX, ESXi has no service console (the service console in ESX was a modified version of RHEL)
ESXi is extremely thin, resulting in fast installation and fast boot
ESXi can be purchased as an embedded hypervisor on hardware
ESXi has a built-in server health status check
ESXi 4.1 vs ESXi 5.0 – Migration
Local upgrade from CD
VMware Update Manager (only supports upgrades from ESX/ESXi 4.x to ESXi 5.0)
ESXi 4.1 vs ESXi 5.0 – Features
vSphere Auto deploy
Storage DRS
HA – Primary/secondary concept changed to master/slave
Profile driven storage
VMFS version – 3 → 5
ESXi firewall
VMware hardware version – 7 → 8
VMware tools version – 4.1 → 5
vCPU – 8 → 32
vRAM – 256 GB → 1 TB
VMs per host – 320 → 512
RAM per host – 1TB → 2TB
USB 3.0 support
vApp
HA 5.0
Uses an agent called FDM – Fault Domain Manager
HA now talks directly to hostd instead of using the vCenter agent vpxa
Master/slave concept
Master
monitors availability of hosts and VMs
restarts failed VMs after a host failure
maintains a list of all VMs on each host
exchanges state with vCenter
monitors the state of slaves
Slave
monitors running VMs, sends their status to the master, and restarts VMs on request from the master
monitors master node health
if master fails, participates in election
Two different heartbeat mechanisms – Network heartbeat and datastore heartbeat
Network heartbeat
Sent between slave and master every second
When a slave stops receiving heartbeats from the master, it checks whether it itself is isolated, or whether the master is isolated or has failed
Datastore heartbeat
Used to distinguish between isolation and failure
Uses a 'poweron' file in the datastore to determine whether an unreachable host is still alive
This mechanism is used only when the master loses network connectivity with a host
Two datastores are chosen for this purpose (heartbeat datastores)
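The decision the master makes from these two heartbeats can be sketched as a small decision table. This is a simplified illustrative model in Python, not FDM's actual implementation; the function name and return labels are invented for this sketch:

```python
def classify_host(network_heartbeat_ok, datastore_heartbeat_ok):
    """Simplified model of how an HA master classifies a silent slave.

    A real FDM agent exchanges network heartbeats every second and also
    checks the heartbeat datastores; this sketch only captures the
    resulting decision table.
    """
    if network_heartbeat_ok:
        return "healthy"      # slave still reachable over the network
    if datastore_heartbeat_ok:
        return "isolated"     # host is alive but network-partitioned
    return "failed"           # no heartbeat anywhere: restart its VMs

# A host that stopped network heartbeats but still updates its
# heartbeat datastore is isolated, not dead:
print(classify_host(False, True))   # isolated
```

Only in the "failed" case does the master restart the VMs elsewhere; in the "isolated" case the host's own isolation response (below) applies.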
Isolation response
PowerOff
Leave Powered On
Shutdown
vMotion
vMotion enables live migration of running virtual machines from one host to another with zero downtime
Prerequisites
Host must be licensed for vMotion
Configure the host with at least one vMotion network interface (VMkernel port group)
Shared storage (this requirement was relaxed in 5.1, which added vMotion without shared storage)
Same VLAN and port group (network) label on both hosts
Gigabit Ethernet network required between hosts
Processor compatibility between hosts
vMotion does not support migration of applications clustered using Microsoft Cluster Service (MSCS)
No CD-ROM or local ISO attached
CPU affinity must not be enabled
VMware Tools should be installed
What is DRS? Types of DRS
Distributed Resource Scheduler
It is a feature of a cluster
DRS continuously monitors utilization across the hosts and moves virtual machines to balance the computing capacity
DRS uses vMotion for its functioning
Types of DRS
Fully automated – The VMs are moved across the hosts automatically. No admin intervention required.
Partially automated – The VMs are placed on hosts automatically at power-on. Once up, vCenter only provides DRS recommendations, which the admin has to apply manually.
Manual – Admin has to act according to the DRS recommendations
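The balancing idea behind these modes can be sketched as follows. This is a toy model only; real DRS weighs CPU and memory demand, affinity rules, and migration cost, and the function and host names here are illustrative:

```python
def drs_recommendation(host_cpu_load, threshold=0.2):
    """Toy DRS balancing: recommend a vMotion from the busiest host to
    the least busy one when the utilization spread exceeds a threshold.
    host_cpu_load maps host name -> CPU utilization (0.0 - 1.0)."""
    busiest = max(host_cpu_load, key=host_cpu_load.get)
    idlest = min(host_cpu_load, key=host_cpu_load.get)
    if host_cpu_load[busiest] - host_cpu_load[idlest] > threshold:
        return (busiest, idlest)   # move a VM from busiest -> idlest
    return None                    # cluster considered balanced

print(drs_recommendation({"esx01": 0.9, "esx02": 0.3, "esx03": 0.5}))
# ('esx01', 'esx02')
```

In fully automated mode such a recommendation would be executed immediately via vMotion; in partial/manual mode it is surfaced to the admin instead.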
DRS prerequisites
Shared storage
Processor compatibility of hosts in the DRS cluster
vMotion prerequisites
vMotion is not working. What are the possible reasons?
Ensure vMotion is enabled on all ESX/ESXi hosts
Ensure that all vMotion prerequisites are met
Verify whether the ESXi/ESX host can be reconnected, or whether reconnecting it to vCenter resolves the issue
Verify that time is synchronized across environment
Verify that the required disk space is available
What happens if a host is taken to maintenance mode
Hosts are taken to maintenance mode during the course of maintenance
In a standalone ESX/ESXi setup, all the VMs need to be shut down before the host enters maintenance mode
In a vCenter setup with DRS enabled, the VMs will be migrated to other hosts automatically.
How will you clone a VM in an ESXi without vCenter
Using vmkfstools
Copy the vmdk file and attach to a new VM
Using VMware converter
What is vSAN?
It is a hypervisor-converged storage solution built by aggregating the local storage attached to the ESXi hosts managed by a vCenter.
Recommended iSCSI configuration?
A separate vSwitch and a separate network, distinct from the VM traffic network, should be used for iSCSI traffic. Dedicated physical NICs should be connected to the vSwitch configured for iSCSI traffic.
What is iSCSI port binding ?
Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside in the same broadcast domain and IP subnet, to allow multiple paths to an iSCSI array that broadcasts a single IP address.
iSCSI port binding considerations ?
Array Target iSCSI ports must reside in the same broadcast domain and IP subnet as the VMkernel port.
All VMkernel ports used for iSCSI connectivity must reside in the same broadcast domain and IP subnet.
All VMkernel ports used for iSCSI connectivity must reside in the same vSwitch.
Currently, port binding does not support network routing.
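The subnet constraints above can be expressed as a short sanity-check sketch. This is illustrative only; `can_bind` and the assumed /24 prefix are inventions for this example, not a VMware API:

```python
import ipaddress

def can_bind(vmk_ips, target_ip, prefix=24):
    """Port-binding sanity check: all iSCSI VMkernel ports and the
    array target must sit in the same IP subnet, since port binding
    does not support routed connectivity. A common prefix length is
    assumed for illustration."""
    nets = {ipaddress.ip_interface(f"{ip}/{prefix}").network
            for ip in vmk_ips + [target_ip]}
    return len(nets) == 1   # exactly one subnet -> binding is valid

print(can_bind(["10.0.1.11", "10.0.1.12"], "10.0.1.100"))  # True
print(can_bind(["10.0.1.11", "10.0.2.12"], "10.0.1.100"))  # False
```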
Recommended iSCSI configuration of a 6 NIC infrastructure ? (Answer changes as per the infrastructure requirements)
2 NICs for VM traffic
2 NICs for iSCSI traffic
1 NIC for vMotion
1 NIC for management network
Post conversion steps in P2V
Adjust the virtual hardware settings as required
Remove non present device drivers
Remove all unnecessary devices such as serial ports, USB controllers, floppy drives, etc.
Install VMware tools
Which esxtop metric will you use to confirm latency issue of storage ?
esxtop → press 'd' (disk adapter view) → check the DAVG column (average device latency in ms)
What are standby NICs
These adapters will only become Active if the defined Active adapters have failed.
Path selection policies in ESXi
Most Recently Used (MRU)
Fixed
Round Robin
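Round Robin rotates I/O across all active paths; a minimal sketch follows. NMP actually switches paths after a configurable number of I/Os (1000 by default), while this simplified model switches on every I/O, and the class name is invented:

```python
from itertools import cycle

class RoundRobinPSP:
    """Toy model of the Round Robin path selection policy: successive
    I/Os are rotated across all active paths to the LUN."""
    def __init__(self, paths):
        self._paths = cycle(paths)   # endless rotation over the paths

    def next_path(self):
        return next(self._paths)

psp = RoundRobinPSP(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print(psp.next_path())  # vmhba1:C0:T0:L0
print(psp.next_path())  # vmhba2:C0:T0:L0
```

MRU, by contrast, would keep using one working path until it fails, and Fixed always prefers a designated path when it is available.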
Which networking features are recommended while using iSCSI traffic
iSCSI port binding
Jumbo Frames
Ports used by vCenter
80,443,902
What is ‘No Access’ role
Users assigned the 'No Access' role on an object cannot view or change that object in any way
When is a swap file created
When the VM is powered on – the VMkernel creates the .vswp swap file at power-on and deletes it at power-off
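The size of that swap file is the VM's configured memory minus its memory reservation; a minimal sketch (the function name is illustrative):

```python
def vswp_size_mb(configured_mem_mb, reservation_mb=0):
    """Size of a VM's .vswp file, created at power-on: configured
    memory minus the memory reservation. A fully reserved VM gets a
    zero-length swap file."""
    return max(configured_mem_mb - reservation_mb, 0)

print(vswp_size_mb(4096, 1024))  # 3072
print(vswp_size_mb(4096, 4096))  # 0
```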
The Active Directory group whose members become ESXi administrators by default.
ESX Admins
Which is the command used in ESXi to manage and retrieve information from virtual machines ?
vmware-cmd
Which is the command used in ESXi to view live performance data?
esxtop
Command line tool used in ESXi to manage virtual disk files?
vmkfstools
Port used for vMotion
8000
Log file location of VMware host
/var/log/vmware
Can you map a single physical NIC to multiple virtual switches ?
No
Can you map a single virtual switch to multiple physical NICs?
Yes. This method is called NIC teaming.
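With the default 'route based on originating virtual port ID' teaming policy, each VM port is pinned to one uplink in the team; a rough sketch (the real vSwitch hash is internal, so the modulo here is an illustrative stand-in, and the names are invented):

```python
def pick_uplink(port_id, uplinks):
    """Toy model of 'route based on originating virtual port ID'
    NIC teaming: each virtual port is pinned to one physical uplink,
    so one VM's traffic always leaves the same NIC unless that NIC
    fails and a failover occurs."""
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(pick_uplink(7, uplinks))  # vmnic1
print(pick_uplink(4, uplinks))  # vmnic0
```

Note that one VM never uses more than one uplink at a time under this policy; teaming increases aggregate bandwidth and availability across VMs, not per-VM throughput.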
VMKernel portgroup can be used for:
vMotion
Fault Tolerance Logging
Management traffic
Major difference between ESXi 5.1 and ESXi 5.5 free versions
Up to ESXi 5.1, the free version limited the maximum physical memory per host to 32 GB. From 5.5 onwards this limit has been lifted.
Maximum number of LUNs that can be attached to a host (ESXi 5.0)
256
Maximum number of vCPUs that can be assigned to a VM (ESXi 5.0)
32
What is CPU affinity in VMware? Its impact on DRS?
Here, 'CPU' refers to a logical processor on a hyperthreaded system and to a core on a non-hyperthreaded system
By setting CPU affinity for each VM, you can restrict the assignment of VMs to a subset of available processors
The main use of CPU affinity is for display-intensive workloads that spawn additional threads alongside the vCPUs.
DRS will not work with CPU affinity
VMversion 4 vs VMversion 7
Version 4
Runs on ESX 3.x
Max supported RAM 64 GB
Max vCPUs 4
MS cluster is not supported
4 NICs/VM
No USB Support
Version 7
Runs on vSphere 4.x
Max supported RAM 256 GB
Max vCPUs 8
MS cluster is supported
10 NICs/VM
USB support
What happens to the VMs if a standalone host is taken to maintenance mode?
In the case of standalone servers, VMware recommends that VMs be powered off before putting the server into maintenance mode
If we put a standalone host into maintenance mode without powering off the VMs, it will remain in the 'entering maintenance mode' state until all the VMs are shut down
When all the VMs are powered down, the host status changes to ‘under maintenance’
How can you edit a vm template?
VM templates cannot be modified directly
First, the VM template has to be converted to a virtual machine
After making the necessary changes in the virtual machine, convert it back to a template