
The Clean Exit: How to Safely Remove Storage Devices from ESXi | Lazy Admin Blog


In the world of storage, “unpresenting” a LUN is more than just a right-click. If you don’t follow the proper decommissioning workflow, ESXi will keep trying to talk to a ghost device, leading to host instability and long boot times.

Follow this definitive checklist and procedure to ensure your environment stays clean and APD-free.

The “Safe-to-Remove” Checklist

Before you even touch the unmount button, verify these 7 critical points:

  1. Evacuate Data: Move or unregister all VMs, snapshots, templates, and ISO images from the datastore.
  2. HA Heartbeats: Ensure the datastore is NOT being used for vSphere HA heartbeats.
  3. No Clusters: Remove the datastore from any Datastore Clusters or Storage DRS management.
  4. Coredump: Confirm the LUN isn’t configured as a diagnostic coredump partition.
  5. SIOC: Disable Storage I/O Control (SIOC) for the datastore.
  6. RDMs: If the LUN is a Raw Device Mapping, remove the RDM from the VM settings (select “Delete from disk” to delete the mapping file).
  7. Scratch Location: Ensure the host isn’t using this LUN for its persistent scratch partition.

Pro Tip: Check Scratch Location via PowerCLI

Use this script to verify your scratch config across a cluster:

```powershell
# Report the configured scratch location for every host in the cluster
$cluster = "YourClusterName"
foreach ($esx in Get-Cluster $cluster | Get-VMHost) {
    Get-VMHostAdvancedConfiguration -VMHost $esx -Name "ScratchConfig.ConfiguredScratchLocation"
}
```

(On current PowerCLI releases, `Get-AdvancedSetting -Entity $esx -Name "ScratchConfig.ConfiguredScratchLocation"` replaces the deprecated `Get-VMHostAdvancedConfiguration` cmdlet.)

Step 1: Identify your NAA ID

You need the unique Network Address Authority (NAA) ID to ensure you are pulling the right plug.

  • Via GUI: Check the Properties window of the datastore.
  • Via CLI: Run esxcli storage vmfs extent list
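The extent list output is easy to script if you want to pull the NAA ID for a specific datastore. A minimal sketch — the sample output and device IDs below are invented for illustration; on a real host you would pipe the live `esxcli` output instead of the canned function:

```shell
# Hedged sketch: map a datastore name to its NAA ID by parsing the
# columns of `esxcli storage vmfs extent list`. The sample output below
# (names, UUIDs, NAA IDs) is made up for demonstration purposes.
sample_extent_list() {
  cat <<'EOF'
Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
Datastore01  5f1a2b3c-deadbeef-0000-001b21aabbcc              0  naa.60003ff44dc75adc9d2a1b5b7c367890          1
Datastore02  5f1a2b3c-cafef00d-0000-001b21aabbcc              0  naa.60003ff44dc75adc9d2a1b5b7c300042          1
EOF
}

# Print the NAA ID (4th column) for the datastore named in $1
naa_for_datastore() {
  sample_extent_list | awk -v ds="$1" '$1 == ds {print $4}'
}

naa_for_datastore Datastore01   # -> naa.60003ff44dc75adc9d2a1b5b7c367890
```

Note the `awk` match keys on the first whitespace-delimited column, so datastore names containing spaces would need a smarter parse.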

Step 2: The Unmount & Detach Workflow

1. Unmount the File System

In the Configuration tab > Storage, right-click the datastore and select Unmount. If you are doing this for multiple hosts, use the Datastores view (Ctrl+Shift+D) to unmount from the entire cluster at once.

2. Detach the Device (The Most Important Step)

Unmounting removes the “logical” access, but Detaching tells the kernel to stop looking for the “physical” device.

  • Switch to the Devices view.
  • Right-click the NAA ID and select Detach.
  • The state should now show as Unmounted.

Note: Detaching is a per-host operation. You must perform this on every host that has visibility to the LUN to avoid APD states.
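For scripted environments, the GUI Detach action has a CLI equivalent, `esxcli storage core device set --state=off -d <NAA_ID>`, which must be run on every host. A hedged sketch that only echoes the per-host commands rather than executing them — the host names are hypothetical placeholders:

```shell
# Dry-run sketch: emit the detach command for each host in the cluster.
# Remove the echo (and run over SSH or PowerCLI) to execute for real.
NAA_ID="naa.60003ff44dc75adc9d2a1b5b7c367890"   # replace with your device
HOSTS="esx01 esx02 esx03"                        # hypothetical host names

# Build the CLI equivalent of the GUI "Detach" action for one device
detach_cmd() {
  echo "esxcli storage core device set --state=off -d $1"
}

for h in $HOSTS; do
  echo "ssh root@$h $(detach_cmd "$NAA_ID")"
done
```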


Step 3: Cleanup the SAN & Host

Once the state is “Unmounted” across all hosts, you can safely unmap/unpresent the LUN from your SAN array.

Permanent Decommissioning

To prevent “ghost” entries from appearing in your detached list, run these commands on the host:

  1. List detached devices: esxcli storage core device detached list
  2. Remove the configuration permanently: esxcli storage core device detached remove -d <NAA_ID>
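The two commands above chain together nicely: parse the detached list and emit one `remove` command per stale entry. A sketch against canned sample output (the device IDs are invented); swap the sample function for the live `esxcli` call on a real host:

```shell
# Hedged sketch: turn `esxcli storage core device detached list` output
# into the matching `detached remove` cleanup commands (dry run: the
# commands are printed, not executed). Sample output is illustrative.
sample_detached_list() {
  cat <<'EOF'
Device UID                            State
------------------------------------  -----
naa.60003ff44dc75adc9d2a1b5b7c367890  off
naa.60003ff44dc75adc9d2a1b5b7c300042  off
EOF
}

cleanup_cmds() {
  sample_detached_list |
    awk '/^naa\./ {print "esxcli storage core device detached remove -d " $1}'
}

cleanup_cmds
```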

The Master List: VMware vCenter Release & Build Number History (Updated 2026) | Lazy Admin Blog


Version tracking is the backbone of lifecycle management. Whether you are patching against the latest security vulnerability or verifying compatibility for a backup agent, you need the exact build number.

Below is the comprehensive history of vCenter Server, from the cutting-edge vSphere 9.0 down to the legacy VirtualCenter 2.5.

vCenter Server 9.0 Build Numbers (Latest)

vSphere 9.0 represents the latest shift toward AI-integrated infrastructure and cloud-native operations.

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| vCenter Server 9.0.2.0 | 9.0.2 | 01/20/2026 | 25148086 |
| vCenter Server 9.0.1.0 | 9.0.1 | 09/29/2025 | 24957454 |
| vCenter Server 9.0 GA | 9.0.0 | 06/17/2025 | 24755230 |

vCenter Server 8.0 Build Numbers

The 8.0 Update 3 branch is the current stable “workhorse” for most enterprise environments.

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| vCenter Server 8.0 Update 3i | 8.0.3.00800 | 02/24/2026 | 25197330 |
| vCenter Server 8.0 Update 3h | 8.0.3.00700 | 12/15/2025 | 25092719 |
| vCenter Server 8.0 Update 3g | 8.0.3.00600 | 07/29/2025 | 24853646 |
| vCenter Server 8.0 Update 3e | 8.0.3.00500 | 04/11/2025 | 24674346 |
| vCenter Server 8.0 Update 3 | 8.0.3.00000 | 06/25/2024 | 24022515 |
| vCenter Server 8.0 Update 2 | 8.0.2.00000 | 09/21/2023 | 22385739 |
| vCenter Server 8.0 Update 1 | 8.0.1.00000 | 04/18/2023 | 21560480 |
| vCenter Server 8.0 GA | 8.0.0.10000 | 10/11/2022 | 20519528 |

vCenter Server 7.0 Build Numbers

Note: vCenter for Windows was officially removed starting with version 7.0.

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| vCenter Server 7.0 Update 3w | 7.0.3.02500 | 09/29/2025 | 24927011 |
| vCenter Server 7.0 Update 3l | 7.0.3.01400 | 03/30/2023 | 21477706 |
| vCenter Server 7.0 Update 2 | 7.0.2.00000 | 03/09/2021 | 17694817 |
| vCenter Server 7.0 GA | 7.0.0.10100 | 04/02/2020 | 15952498 |

Legacy vCenter Server Build Numbers (vSphere 4.0 – 6.7)

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| vCenter Server 6.7 Update 3w | 6.7.0.58000 | 10/28/2024 | 24337536 |
| vCenter Server 6.5 Update 3w | 6.5.0.43000 | 07/04/2024 | 24045034 |
| vCenter Server 6.0 Update 1 | 6.0 U1 | 09/10/2015 | 3018524 |
| vCenter Server 5.5 Update 3 | 5.5 U3 | 09/16/2015 | 3000241 |
| vCenter Server 5.1 Update 3 | 5.1 U3 | 12/04/2014 | 2306353 |
| vCenter Server 5.0 GA | 5.0 GA | 08/24/2011 | 456005 |
| vCenter Server 4.1 GA | 4.1 GA | 07/13/2010 | 259021 |
| vCenter Server 4.0 GA | 4.0 GA | 05/21/2009 | 162856 |
| VirtualCenter 2.5.0 GA | 2.5.0 | 12/10/2007 | 64192 |

Quick Tips for the Lazy Admin

  1. Check via VAMI: For 6.7 and newer, go to https://<vcenter-ip>:5480. The version and build are right on the login screen.
  2. Compatibility: Before upgrading vCenter, check the VMware Interoperability Matrix. Just because vCenter 9.0 is out doesn’t mean your older ESXi 6.7 hosts can talk to it!
  3. VCSA Migration: If you are still on version 6.5 or 6.7, your next step is a migration to the Appliance (VCSA). There is no “in-place” upgrade for Windows-based vCenter to 7.0+.
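If you script health checks against this list, a fragment of the tables above can live in a tiny lookup function. A sketch in shell, using only builds listed in this post — extend the `case` with whichever builds your estate actually runs:

```shell
# Hedged sketch: map a handful of vCenter build numbers (taken from the
# tables above) back to their release names. Not exhaustive — add your
# own entries as needed.
vc_release_for_build() {
  case "$1" in
    24755230) echo "vCenter Server 9.0 GA" ;;
    24022515) echo "vCenter Server 8.0 Update 3" ;;
    20519528) echo "vCenter Server 8.0 GA" ;;
    15952498) echo "vCenter Server 7.0 GA" ;;
    *)        echo "unknown build" ;;
  esac
}

vc_release_for_build 24022515   # -> vCenter Server 8.0 Update 3
```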

#VMware #vSphere9 #vCenter #SysAdmin #Virtualization #Datacenter #LazyAdmin #BuildNumbers #ITOps #PatchManagement

SRM Plugin Down? How to Generate Diagnostic Logs via the Command Line | Lazy Admin Blog


In a high-pressure recovery situation, the last thing you want to see is the “SRM Connection Failed” error in your vSphere Client. If you can’t access the SRM interface to click “Gather Logs,” you have to go straight to the source.

Site Recovery Manager includes a standalone support script that packages all necessary diagnostics directly from the Windows Server filesystem, even if the SRM service itself is struggling.

Step 1: Locate the Support Script

Log into the Windows Server where SRM is installed and navigate to the \bin\ directory. The path varies slightly depending on your version and OS architecture:

  • 64-bit Windows (Standard): C:\Program Files (x86)\VMware\VMware vCenter Site Recovery Manager\bin\ (SRM is a 32-bit application, so on 64-bit Windows it installs under Program Files (x86))
  • 32-bit Windows (Legacy): C:\Program Files\VMware\VMware vCenter Site Recovery Manager\bin\
  • SRM 1.0 (Vintage): C:\Program Files\VMware\VMware Site Recovery Manager\bin\

Step 2: Generate the Bundle

  1. Look for the file named srm-support.wsf.
  2. Double-click the file to execute it.
  3. Wait a few moments. A compressed log bundle will appear on the Desktop of the current user, named in this format: srm-plugin-support-MM-DD-YYYY-hh-mm.zip.

Step 3: Label Your Logs (The “Pro” Tip)

VMware Support will often need logs from both the Protected and Recovery sites. Because the log bundles look identical, VMware highly recommends renaming the files before uploading them to the FTP portal:

  • protected-srm-support-MM-DD-YYYY.zip
  • recovery-srm-support-MM-DD-YYYY.zip

Don’t Forget the SRA Logs!

If your issue involves storage replication, VMware will also need the Storage Replication Adapter (SRA) logs. These are usually tucked away in vendor-specific folders:

  • ...\VMware vCenter Site Recovery Manager\scripts\SAN\<SRA Vendor Name>\log\
  • C:\Program Files\<SRA Vendor Name>\

Manual Configuration Check

If you need to verify your extension ID or database connection strings manually, you can find the core XML configuration files in the \config\ directory:

  • extension.xml
  • vmware-dr.xml

HPE Support 101: A Step-by-Step Guide to Raising ProLiant & Blade Server Cases | Lazy Admin Blog


Is your ProLiant DL360 Gen11 throwing a pre-failure alert? Or perhaps a BL460c blade has gone “No Power” in the chassis? Since the split of HP into HP Inc. and Hewlett Packard Enterprise (HPE), the support portal has evolved.

If you are a “Lazy Admin,” you want to get your ticket in quickly so you can get back to what matters. Here is the streamlined approach to navigating HPE Support.

Step 1: Identify Your Product Family

Before heading to the portal, confirm which category your hardware falls into. HPE generally groups them as:

  • BL Series: ProLiant Server Blades (Enclosure-based)
  • DL Series: Rack-mount Servers (Multi-node and Standard)
  • ML Series: Tower Servers
  • MicroServer: For small office/home office environments
  • Apollo/Scalable Systems: High-density computing

Step 2: Use the Global Search

If you aren’t sure which specific “Packaged Cluster” or “Scalable System” you have, use the HPE Support Center search bar. Enter your specific model (e.g., ProLiant DL380 Gen10).

Why this page is useful: Before you even raise a case, the product page provides:

  • Top Issues & Solutions: Often, the fix for a known firmware bug is already listed here.
  • Drivers & Software: Essential for getting the latest SPP (Service Pack for ProLiant).
  • Warranty Check: Crucial! HPE will not open a case for hardware replacement if the serial number isn’t linked to an active contract or warranty.

Step 3: The HP Passport Login

To submit or manage cases, you must have an HPE Passport account. This is a single sign-on (SSO) service.

You can access the login page directly here: https://support.hpe.com/hpesc/public/home/

  • Tip: Link your company’s Service Agreement IDs (SAID) or Support Account References (SAR) to your Passport account before the server fails. It saves massive amounts of time during an actual emergency.

Step 4: Submitting the Case

Navigate to the Submit or manage support cases section. You will need:

  1. Serial Number (S/N): Found on the pull-out “luggage tag” on the front of the server or via iLO.
  2. Product Number (P/N): The specific hardware SKU.
  3. Logs: HPE will almost always ask for an Active Health System (AHS) log. Download this from the iLO before you start the ticket.

#HPE #ProLiant #ServerSupport #SysAdmin #DataCenter #ITOps #HPEPassport #Troubleshooting #LazyAdmin #EnterpriseIT

vSphere Ports & Connections: The Infrastructure Roadmap | Lazy Admin Blog


In a locked-down enterprise environment, the “Any-to-Any” firewall rule is a myth. To manage ESXi effectively, you need to poke specific holes in your hardware and software firewalls.

The Core Management Ports

These are the “must-haves” for basic connectivity between vCenter, the vSphere Client, and the Host.

| Port | Protocol | Source | Destination | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | Management Workstation | vCenter / ESXi | vSphere Client / SDK: The primary port for the Web Client and API access. |
| 902 | TCP/UDP | vCenter Server | ESXi Host | vCenter Agent (vpxa): vCenter uses this to send data to the host and receive heartbeats. |
| 902 | TCP | Management Workstation | ESXi Host | VM Console: Required to open the “Remote Console” (MKS) to a virtual machine. |
| 80 | TCP | vCenter / Workstation | ESXi Host | HTTP: Used for redirecting to 443 and for some legacy file downloads. |
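If you maintain firewall configs as code, the core-port table converts naturally into rule data. An illustrative sketch — the output is pseudo-rule text for readability, not actual iptables syntax:

```shell
# Hedged sketch: generate human-readable allow rules from the core
# management ports listed above. Output format is illustrative only.
core_rules() {
  while read -r port proto purpose; do
    echo "allow $proto dport $port  # $purpose"
  done <<'EOF'
443 tcp vSphere-Client-SDK
902 tcp vCenter-Agent-and-Console
902 udp vCenter-Heartbeat
80 tcp HTTP-redirect
EOF
}

core_rules
```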

Advanced Feature Ports

If you are using specific vSphere features like vMotion, HA, or specialized storage, you need these additional ports open:

1. vMotion (Live Migration)

  • 8000 (TCP): Required for vMotion traffic.
  • 2049 (TCP/UDP): If using NFS storage for the virtual disks.

2. vSphere High Availability (HA)

  • 8182 (TCP/UDP): Used by the Fault Domain Manager (FDM) agent for inter-host communication and election of the master host.

3. Provisioning & Deployment

  • 69 (UDP): TFTP, used for PXE booting ESXi for Auto Deploy.
  • 4012 (TCP): Used by the Auto Deploy service.

4. Troubleshooting & Monitoring

  • 22 (TCP): SSH access to the ESXi Shell.
  • 161 / 162 (UDP): SNMP polling and traps for hardware monitoring.

Troubleshooting “Host Disconnected”

If your host shows as “Not Responding” in vCenter, check these three things in order:

  1. Ping: Can the vCenter server ping the ESXi management IP?
  2. Port 902: From the vCenter server, try to telnet to the host on port 902 (telnet <host-ip> 902). If it fails, the heartbeat can’t get through.
  3. DNS: VMware is extremely sensitive to DNS. Ensure forward and reverse lookups work for both the vCenter and the Host.
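The first two checks can be scripted without telnet by using bash’s built-in `/dev/tcp`. A sketch — the host name is a placeholder, and note this only exercises the TCP side of 902 (the UDP heartbeat can’t be probed this way):

```shell
# Hedged sketch: quick TCP reachability probe for the vCenter->host
# port 902. Requires bash (for /dev/tcp) and coreutils `timeout`.
port_open() {
  # returns 0 if a TCP connection to host $1, port $2 succeeds within 2s
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

# esx01.lab.local is a hypothetical host name -- substitute your own
if port_open esx01.lab.local 902; then
  echo "902/tcp reachable"
else
  echo "902/tcp blocked or host down"
fi
```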

Lazy Admin Tip 💡

Don’t memorize every port! Use the VMware Ports and Protocols Tool (the official online matrix). It allows you to select your source and destination products and generates a custom firewall rule list for you.

A high-resolution PDF of the port diagram, “Connections and Ports in ESX and ESXi,” can be downloaded here.

#VMware #vSphere #Networking #SysAdmin #Firewall #DataCenter #ESXi #ITOps #LazyAdmin #Connectivity

The Master List: VMware ESXi Release and Build Number History (Updated 2026) | Lazy Admin Blog


Is your host up to date? Checking the “About” section in your vSphere Client is step one, but cross-referencing that number against this list is how you confirm if you’re on a General Availability (GA) release, an Update, or an Express Patch.

vSphere ESXi 9.0 (Latest)

The new generation of the hypervisor, optimized for AI workloads and DPUs.

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| VMware ESXi 9.0.2 | 9.0.2 | 2026-01-20 | 25148080 |
| VMware ESXi 9.0.1 | 9.0.1 | 2025-09-29 | 24957450 |
| VMware ESXi 9.0 GA | 9.0 GA | 2025-06-17 | 24755225 |

vSphere ESXi 8.0

The enterprise workhorse for 2024-2026.

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| VMware ESXi 8.0 Update 3 | 8.0 U3 | 2024-06-25 | 24022510 |
| VMware ESXi 8.0 Update 2 | 8.0 U2 | 2023-09-21 | 22380479 |
| VMware ESXi 8.0 Update 1 | 8.0 U1 | 2023-04-18 | 21495797 |
| VMware ESXi 8.0 GA | 8.0 GA | 2022-10-11 | 20513097 |

vSphere ESXi 7.0

Note: This version introduced the new Lifecycle Manager (vLCM).

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| VMware ESXi 7.0 Update 3w | 7.0 U3w | 2025-09-29 | 24927030 |
| VMware ESXi 7.0 Update 3 | 7.0 U3 | 2021-10-05 | 18644231 |
| VMware ESXi 7.0 GA | 7.0 GA | 2020-04-02 | 15843807 |

vSphere ESXi 6.x Legacy (Archive)

| Name | Version | Release Date | Build Number |
| --- | --- | --- | --- |
| VMware ESXi 6.7 Update 3 | 6.7 U3 | 2019-08-20 | 14320388 |
| VMware ESXi 6.5 Update 3 | 6.5 U3 | 2019-07-02 | 13932383 |
| VMware ESXi 6.0 Update 1a | 6.0 U1a | 2015-10-06 | 3073146 |
| VMware ESXi 6.0 GA | 6.0 GA | 2015-03-12 | 2494585 |

How to Verify Your Build Number

If you aren’t at your desk and only have SSH access to the host, you can find your build number instantly with this command:

vmware -v

Example Output:

VMware ESXi 8.0.0 build-20513097
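To use that output in scripts, strip everything but the build number. A small sed sketch (the sample string is echoed here in place of running `vmware -v` on a live host):

```shell
# Extract just the numeric build from a "VMware ESXi X.Y.Z build-NNNN" string
build_from_version_string() {
  sed -n 's/.*build-\([0-9]\{1,\}\).*/\1/p'
}

echo "VMware ESXi 8.0.0 build-20513097" | build_from_version_string   # -> 20513097
```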

Lazy Admin Tip 💡

Always remember the vCenter Interoperability Rule: Your vCenter Server must always be at a build version equal to or higher than your ESXi hosts. If you patch your hosts to vSphere 9.0 while vCenter is still on 8.0, your hosts will show as “Not Responding” or “Disconnected.”

#VMware #vSphere9 #ESXi #SysAdmin #Virtualization #PatchManagement #DataCenter #LazyAdmin #BuildNumbers #ITOperations

Emergency Log Collection: Generating and Uploading ESXi Support Bundles | Lazy Admin Blog


If you can’t generate a support bundle through vCenter, your best bet is the ESXi Shell. By running vm-support directly on the host, you bypass the management overhead and get your diagnostics faster.

Step 1: Generate Logs via SSH (CLI)

Before running the command, identify a datastore with at least 5-10GB of free space to store the compressed bundle.

  1. SSH into your ESXi host using Putty.
  2. Navigate to your chosen datastore: cd /vmfs/volumes/YOUR_DATASTORE_NAME/
  3. Run the support command and redirect the output to a specific file name: vm-support -s > vm-support-HostName-$(date +%Y%m%d).tgz
    • -s stands for “stream,” directing the output to the file you specified.
    • Tip: Using $(date +%Y%m%d) automatically adds the current date to the filename.
  4. Once finished, use the vSphere Datastore Browser to download the .tgz file to your local workstation.
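The naming and free-space checks above can be wrapped in a couple of helper functions: one builds the dated filename, the other enforces the 5 GB minimum before you stream the bundle. A sketch — the thresholds and paths are from the guidance above, not hard requirements:

```shell
# Hedged sketch: helpers for the bundle workflow described above.

# Build the dated bundle name from step 3 for a given host name
bundle_name() {
  echo "vm-support-$1-$(date +%Y%m%d).tgz"
}

# Require at least ~5 GB free in the target directory before streaming
enough_space() {
  free_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  [ "$free_kb" -ge $((5 * 1024 * 1024)) ]
}

bundle_name esx01    # e.g. vm-support-esx01-20260120.tgz
enough_space /tmp && echo "enough space in /tmp" || echo "pick another datastore"
```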

Step 2: Uploading to VMware via FileZilla

VMware provides a public FTP/SFTP landing zone for Support Requests (SR). While many admins use the browser, a dedicated client like FileZilla is much more reliable for large multi-gigabyte bundles.

Configure FileZilla for VMware

  1. Set Transfer Mode: Go to Transfer > Transfer type > Binary. This prevents file corruption during the upload.
  2. Open Site Manager: (File > Site Manager) and create a new site:
    • Host: ftpsite.vmware.com
    • Protocol: FTP (or SFTP if requested by support)
    • Logon Type: Normal
    • User: inbound
    • Password: inbound

Navigating the Remote Site

  1. Connect to the server.
  2. Create your SR Folder: In the “Remote Site” pane, right-click and select Create Directory. Name it exactly after your 10-digit Support Request number (e.g., 2612345678).
  3. Upload: Locate your .tgz bundle in the left pane (Local Site), right-click it, and select Upload.
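If you prefer the CLI to FileZilla, the same upload can be done with curl. This dry-run sketch only prints the command (the host and inbound/inbound credentials come from the steps above; `--ftp-create-dirs` lets curl create the SR-numbered directory itself, and you should confirm FTP vs. SFTP with your support engineer first):

```shell
# Hedged sketch: build (but do not run) a curl upload command for the
# VMware inbound FTP drop, named after your 10-digit SR number.
upload_cmd() {
  sr="$1"
  bundle="$2"
  echo "curl --ftp-create-dirs -T $bundle ftp://inbound:inbound@ftpsite.vmware.com/$sr/"
}

upload_cmd 2612345678 vm-support-esx01-20260120.tgz
```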

Important Note: For security, the VMware FTP is “blind.” You will not see your files or folders once they are created/uploaded. Don’t panic if the directory looks empty after the transfer completes; as long as the transfer queue shows 100%, VMware has it.

#VMware #ESXi #Troubleshooting #SysAdmin #DataCenter #Virtualization #ITOps #FileZilla #LazyAdmin #TechTips

Nuclear Option: How to Force Power Off a Hung VM via SSH | Lazy Admin Blog


We’ve all been there: a Windows Update goes sideways or a database lock freezes a guest OS, and suddenly the “Shut Down Guest” command is greyed out or simply times out. When the GUI fails you, the ESXi Command Line (esxcli) is your best friend.

Step 1: Identify the “World ID”

In ESXi terminology, every running process is assigned a World ID. To kill a VM, you first need to find this unique identifier.

  1. SSH into your ESXi host using Putty.
  2. Run the following command to see all active VM processes: esxcli vm process list
  3. Locate your hung VM in the list. Look for the World ID (a long string of numbers). You will also see the Display Name and the path to the .vmx file to confirm you have the right one.

Step 2: Execute the Kill Command

ESXi offers three levels of “force” to stop a process. It is best practice to try them in order:

  1. Soft: The most graceful. It attempts to give the guest OS a chance to shut down cleanly.
  2. Hard: Equivalent to pulling the power cable. Immediate cessation of the VMX process.
  3. Force: The “last resort.” Use this only if ‘Hard’ fails to clear the process from the kernel.

The Syntax:

```shell
esxcli vm process kill --type=<soft|hard|force> --world-id=<World_ID>
```

Example (Hard Kill): esxcli vm process kill -t hard -w 5241852
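Steps 1 and 2 can be combined: parse the process list for your VM’s World ID, then build the kill command. A dry-run sketch against canned output — the VM names, IDs, and paths below are invented, and on a real host you would pipe the live `esxcli vm process list` instead:

```shell
# Hedged sketch: find a VM's World ID by name, then print (not run)
# the matching kill command. Sample output is purely illustrative.
sample_process_list() {
  cat <<'EOF'
hung-vm-01
   World ID: 5241852
   Process ID: 0
   VMX Cartel ID: 5241851
   Display Name: hung-vm-01
   Config File: /vmfs/volumes/ds01/hung-vm-01/hung-vm-01.vmx
healthy-vm
   World ID: 5241900
   Process ID: 0
   VMX Cartel ID: 5241899
   Display Name: healthy-vm
   Config File: /vmfs/volumes/ds01/healthy-vm/healthy-vm.vmx
EOF
}

world_id_for() {
  sample_process_list | awk -v vm="$1" '
    /^[^ ]/                      { current = $0 }   # unindented line = VM name
    current == vm && /World ID:/ { print $3 }       # grab its World ID
  '
}

wid=$(world_id_for hung-vm-01)
echo "esxcli vm process kill --type=soft --world-id=$wid"   # dry run: try soft first
```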


Step 3: Verify the Result

After running the kill command, it may take a few seconds for the host to clean up the memory registration. Run the list command again to ensure it’s gone:

```shell
esxcli vm process list | grep "Your_VM_Name"
```

If the command returns nothing, the VM is officially offline, and you can attempt to power it back on via the vSphere Client.

Lazy Admin Tip 💡

If esxcli still won’t kill the VM, the process might be stuck in an “I/O Wait” state (usually due to a failed storage path). In that rare case, you might actually need to restart the Management Agents (services.sh restart) or, in extreme cases, reboot the entire host.

#VMware #vSphere #ESXi #SysAdmin #Troubleshooting #Virtualization #ITOps #LazyAdmin #ServerManagement #DataCenter

Troubleshooting VMware Tools Upgrade Failures on Windows Server 2003 | Lazy Admin Blog


In the world of legacy infrastructure, Windows Server 2003 virtual machines (VMs) occasionally hit a “brick wall” during VMware Tools upgrades. While VMware continues to investigate the root cause, the community has identified a manual “scrubbing” process to bypass the installer errors and force a clean installation.


🛑 Pre-Requisites & Data Collection

Before performing a manual registry cleanup, VMware Support recommends gathering the following data to help identify the underlying issue:

  1. Version Mapping: Note the current “from” version and the target “to” version.
  2. Upgrade Method: Are you using the “Interactive” installer, “Silent” switches, or vCenter’s “Automatic” update?
  3. Historical Data: Open the Windows Event Viewer, search for Event Source: MsiInstaller, and look for Event ID: 1034 to find traces of previous installation attempts.

🛠️ The Fix: Manual Registry & System Scrubbing

⚠️ Warning: This procedure involves modifying the Windows Registry. Incorrect changes can destabilize your OS. Always take a full VM Snapshot and a Registry Backup before proceeding.

1. Registry Cleanup (Installer Keys)

Log in as an Administrator, open regedit, and navigate to/delete the following keys if they exist:

  • HKEY_CLASSES_ROOT\Installer\Features\05014B32081E884E91FB41199E24004
  • HKEY_CLASSES_ROOT\Installer\Products\05014B32081E884E91FB41199E24004
  • HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Features\05014B32081E884E91FB41199E24004
  • HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products\05014B32081E884E91FB41199E24004
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\B150AC107B12D11A9DD0006794C4E25
  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{3B410500-1802-488E-9EF1-4B11992E0440}
  • HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.

2. Service Removal

Deep-seated services can block the new installer. Delete these keys under CurrentControlSet\Services:

  • VMTools
  • VMUpgradeHelper
  • VMware Physical Disk Helper Service
  • vmvss

3. File System Cleanup

Once the registry is clear, you must remove the physical binary remnants:

  1. Open Windows Explorer.
  2. Delete the folder: %ProgramFiles%\VMware\VMware Tools.
  3. Restart the Virtual Machine. This step is non-negotiable: the reboot releases the old drivers and any in-memory hooks they still hold.

🚀 Final Step: Fresh Installation

After the reboot, the system will be “clean” of previous VMware Tools traces. You can now mount the VMware Tools ISO through your vSphere client and run a fresh installation.

Pro-Tip: If the VM has other VMware products installed (like vCenter Server), do not do a blanket search-and-destroy for the term “VMware” in the registry. Stick strictly to the keys listed above to avoid breaking other applications.

Troubleshooting vSphere Client Timeouts: “The remote server took too long to respond” | Lazy Admin Blog


If you are seeing the error “The request failed because the remote server [vCenter Name/IP] took too long to respond” specifically when checking the Storage View of a VM or Datacenter, you are likely hitting a timeout related to Single Sign-On (SSO) authentication latency.


🛠️ The Quick Fix: Manual Login

The most common trigger for this timeout is using the “Use Windows Session Credentials” checkbox during login. While convenient, this pass-through method often fails to communicate efficiently with SSO when complex domain trusts are involved.

To solve this immediately:

  1. Log out of the vSphere client.
  2. Manually type your username (e.g., domain\user or user@domain.com) and password.
  3. Do not check the “Use Windows Session Credentials” box.

🏗️ The Permanent Fix: Identity Source Configuration

If manual login works but you want to restore the functionality of session credentials, the issue lies in how vCenter communicates with your external domains.

1. Adding External Domains

Ensure your Active Directory or LDAP identity sources are correctly configured. Refer to VMware KB 2035510 for the specific procedure on adding external domains to the SSO identity sources.

2. Default Domain Order

Even if a domain is added, if it is low on the priority list, the search request may time out before finding the user.

  • Log into the vSphere Web Client as an SSO Administrator (administrator@vsphere.local).
  • Navigate to Administration > Single Sign-On > Configuration.
  • Go to the Identity Sources tab.
  • Add your trusted domains to the Default Domains list.
  • Reorder the domains: Move your most frequently used production domain to the top of the list.
  • Save the configuration.