How to get the Serial Number and System Information of an ESXi host remotely using PuTTY

🛠️ Method 1: Using esxcfg-info

The esxcfg-info command is a comprehensive tool that dumps a massive amount of data regarding the host’s configuration. Filtering this with grep is the quickest way to find your serial number.

Command:

Bash

esxcfg-info | grep "Serial Number"
  • What it does: Searches the entire configuration dump for the specific “Serial Number” string.
  • LazyAdmin Tip: If you get too many results, try esxcfg-info -w | grep "Serial Number" to focus specifically on hardware information.
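
You can see how the filter behaves without touching a host. The excerpt below is fabricated but mimics the tree-style shape of the real `esxcfg-info` dump (which runs to thousands of lines):

```shell
# Fabricated excerpt imitating esxcfg-info's output format:
sample='|----Serial Number............................ABC1234
|----BIOS Version.............................2.11.2'

# The same filter the article uses, applied to the sample:
echo "$sample" | grep "Serial Number"
```

Only the line carrying the serial number survives the filter, which is exactly what you want when the full dump scrolls past for minutes.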

🛠️ Method 2: Using dmidecode (DMI Table Decoder)

If you need more than just the serial number—such as the Manufacturer, Product Name (Model), and UUID—dmidecode is the standard tool. It retrieves data directly from the system’s Desktop Management Interface (DMI) table. Note: newer ESXi builds may not ship dmidecode; on those hosts, the smbiosDump command exposes the same data.

Command:

Bash

/usr/sbin/dmidecode | grep -A5 "System Information"
  • What it does: The -A5 flag tells grep to show the 5 lines after the match, which covers everything through the UUID.
  • The Result: You will typically see:
    1. Manufacturer (e.g., Dell Inc., HP, Cisco)
    2. Product Name (e.g., PowerEdge R740)
    3. Version
    4. Serial Number
    5. UUID

🛠️ Method 3: The Modern ESXCLI Way

If you are on ESXi 6.x or 7.x/8.x, VMware has standardized most commands under the esxcli framework. This is often faster and cleaner than the legacy scripts.

Command:

Bash

esxcli hardware platform get
  • Why use this? It provides a clean, organized output of the Vendor Name, Product Name, and Serial Number without needing to grep.
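
For scripting, you usually want just one field rather than the whole block. The listing below is fabricated but follows the shape of `esxcli hardware platform get` output, so you can try the extraction anywhere:

```shell
# Fabricated sample shaped like 'esxcli hardware platform get' output:
sample='Platform Information
   UUID: 4c4c4544-0042-3610-8052-b2c04f303132
   Product Name: PowerEdge R740
   Vendor Name: Dell Inc.
   Serial Number: ABC1234'

# Pull just the serial -- handy when collecting inventory over SSH:
printf '%s\n' "$sample" | awk -F': ' '/Serial Number/ {print $2}'
```

On a live host you would pipe the real command into the same awk filter.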

⚠️ Troubleshooting Access

  1. SSH is Disabled: By default, SSH is turned off for security. You must enable it via the DCUI (the yellow and grey monitor screen) under “Troubleshooting Options” or via the Host Client web interface.
  2. Permission Denied: Ensure you are logging in as root. Standard users generally do not have permission to query the hardware DMI tables.
  3. Shell Lockdown: If the host is in “Lockdown Mode,” you will be unable to SSH in even with the correct credentials. You’ll need to disable Lockdown Mode via vCenter first.

#VMware #ESXi #SysAdmin #ITPro #CommandLine #ServerHardware #TechTips #LazyAdmin #DataCenter #RemoteManagement #vSphere

ESXi Multipathing Decoded: MRU, Fixed, and Round Robin

When you present a LUN to an ESXi host, the Native Multipathing (NMP) engine automatically assigns a policy based on the type of storage array detected. However, as an admin, you need to understand why a policy was chosen—and when you should manually intervene.

1. Most Recently Used (MRU)

Best For: Active/Passive Arrays. MRU selects the first working path it finds at boot. If that path fails, it switches to a standby path.

  • Key Behavior: It does not fail back. Even if the original path becomes healthy again, the host stays on the current path. This prevents “path thrashing” on Active/Passive arrays where switching controllers is an expensive operation.

2. Fixed

Best For: Active/Active Arrays. The Fixed policy uses a specific “Preferred Path.” If the preferred path fails, it moves to an alternative.

  • Key Behavior: It does fail back. As soon as that designated preferred path is back online, the host immediately switches back to it.

3. Round Robin (RR)

Best For: Load Balancing (Active/Active or ALUA). Round Robin rotates through all available paths to distribute the I/O load.

  • Active/Active: Uses every available path.
  • Active/Passive: Uses only the paths leading to the active controller.

Note: For Microsoft Failover Clusters (MSCS), Round Robin is only supported on ESXi 5.5 and later.

4. Fixed with Array Preference (FIXED_AP)

Introduced in ESXi 4.1 for ALUA-capable arrays, this policy lets the storage array tell the host which path is the “optimal” one.

  • Note: This was removed in ESXi 5.0 in favor of letting the NMP automatically select MRU or Fixed based on the array’s ALUA response.
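
You can audit which policy each LUN is using from the ESXi shell. The listing below is fabricated but follows the shape of `esxcli storage nmp device list` output; the command to actually change a policy is shown as a comment (the device ID is a placeholder, and you should verify array support first):

```shell
# Fabricated excerpt shaped like 'esxcli storage nmp device list':
sample='naa.600508b1001c16aa
   Path Selection Policy: VMW_PSP_MRU
naa.600508b1001c16bb
   Path Selection Policy: VMW_PSP_RR'

# Devices still on MRU -- candidates to review against the array docs:
echo "$sample" | grep -B1 "VMW_PSP_MRU" | grep "^naa"

# To change one device's policy, run this on the host itself:
#   esxcli storage nmp device set -d naa.600508b1001c16aa -P VMW_PSP_RR
```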

⚠️ Critical Warnings for Admins

  1. Don’t Fight the NMP: VMware generally warns against manually changing a LUN from Fixed to MRU. The host chooses the policy based on the hardware it detects; forcing a change can lead to instability.
  2. Verify Vendor Support: Round Robin is powerful but not supported by every array. Always check the VMware Compatibility Guide before making it your default.
  3. MSCS Limitations: If you are virtualizing SQL clusters or other failover clusters, double-check your ESXi version before toggling Round Robin, or you risk losing disk heartbeat connectivity.

#VMware #ESXi #StorageAdmin #vSphere #Multipathing #SysAdmin #ITPro #Virtualization #LazyAdmin #DataCenter #StorageTips

Dell ExtPart: The “Magic” Utility for Legacy Partition Expansion | Lazy Admin Blog

If you’ve ever tried to expand a boot partition on an older Windows box (like Server 2003 or 2008) and found the “Extend Volume” option greyed out, you know the frustration. Enter the Dell ExtPart Utility.

This tiny 36KB tool allows for online volume expansion—meaning you can grow your NTFS partition without a reboot.

⚠️ The “Cloud” Warning

Before we dive in, a massive disclaimer: Do NOT use this in a Cloud/Virtual infrastructure (Azure, AWS, or even modern ESXi/Hyper-V). Modern hypervisors and cloud platforms use virtual disk drivers that can become corrupted if a legacy tool like ExtPart tries to manipulate the partition table directly. Use the native Disk Management or PowerShell tools instead.

How to use ExtPart.exe

  1. Download and Extract: It’s a self-extracting archive. Run it and extract extpart.exe to a folder (e.g., C:\extpart).
  2. Open Command Prompt: Run CMD as an Administrator.
  3. Run the Command: Navigate to your folder and use the following syntax: extpart [drive_letter]: [size_to_add_in_mb]

Example: To add 10GB (10240MB) to your C: drive, you would type: extpart c: 10240

Key Specs:

  • File Name: ExtPart.exe
  • Size: 36KB
  • Requirement: NTFS formatted basic disks.
  • Reboot required? No.

Installation Quick-Steps:

  1. Click Download File on the Dell page.
  2. Run the ExtPart.exe you just downloaded. It is a self-extractor.
  3. By default, it extracts to C:\dell\ExtPart.
  4. Navigate to that folder to find the actual extpart.exe utility you’ll use in the Command Prompt.

🏗️ CLI Command Hierarchy & Navigation

The CLI is organized like a file system. You move “down” into specific modes to manage objects and “up” to return to the global level.

  • EXEC Mode (#): The top-level mode. From here, you can access all other sub-modes.
  • Navigation Commands:
    • scope <object>: Moves into a sub-mode for an existing object (e.g., scope chassis 1).
    • enter <object>: Similar to scope, but used to enter or create an object’s mode.
    • exit: Moves up one level in the hierarchy.
    • top: Jumps immediately back to the EXEC mode prompt.

🛠️ Common Management Commands

  • Chassis: show chassis [inventory/status/psu] - View physical chassis health and components.
  • Servers: show server [inventory/cpu/memory/status] - Audit blade or rack-mount hardware specs.
  • Fabric: show fabric-interconnect [a/b] [inventory] - Check the state of your Fabric Interconnects.
  • Faults: show fault [detail/severity] - List active system alarms and errors.
  • Logs: show sel [chassis-id/blade-id] - View the System Event Log for specific hardware.

💾 The Transactional Model (Commit Buffer)

Unlike many traditional CLIs, UCS Manager uses a transactional model. When you make a configuration change (like set or enable), the change is stored in a temporary buffer and is not live until you explicitly save it.

  1. Modify: set addr 192.168.1.50
  2. Verify: show configuration pending (Optional)
  3. Apply: commit-buffer
  4. Discard: discard-buffer (If you made a mistake)
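
Put together, a typical session looks like the sketch below (the object, address, and values are placeholders; the `*` that UCS appends to the prompt marks an uncommitted transaction):

```
UCS-A# scope fabric-interconnect a
UCS-A /fabric-interconnect # set out-of-band ip 192.168.1.50 netmask 255.255.255.0 gw 192.168.1.1
UCS-A /fabric-interconnect* # show configuration pending
UCS-A /fabric-interconnect* # commit-buffer
UCS-A /fabric-interconnect # top
UCS-A#
```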

#CiscoUCS #CommandLine #SysAdmin #DataCenter #Networking #Cisco #ITPro #LazyAdmin #TechTutorials #UCSM

Demystifying Cisco UCS Monitoring: Manager vs. Standalone C-Series

Whether you are managing a massive farm of B-Series blades or a handful of standalone C-Series rack servers, Cisco UCS provides a sophisticated, stateful monitoring architecture. Understanding how this “Queen Bee” and “Worker Bee” relationship works is the key to reducing alert fatigue and maintaining 100% uptime.

🏗️ The Architecture: DME and Application Gateways

The core of UCS monitoring relies on three primary components that translate raw hardware signals into human-readable data.

1. Data Management Engine (DME)

Think of the DME as the Queen Bee. It is the central brain that maintains the UCS XML Database. This database is the “Single Source of Truth” for your entire domain, housing inventory details, logical configurations (pools/policies), and current health states.

2. Application Gateways (AG)

The AGs are the Worker Bees. These are software agents that communicate directly with hardware endpoints (blades, chassis, I/O modules). They monitor health via the CIMC (Cisco Integrated Management Controller) and feed that data back to the DME in near real-time.

3. Northbound Interfaces

These are your outputs. You have Read-Only interfaces like SNMP and Syslog for external monitoring, and the XML API which is a Read-Write interface, allowing you to both monitor health and push configuration changes.


🚨 The Fault Lifecycle: Managing “State”

Cisco UCS doesn’t just send “fire and forget” alerts. It uses a stateful fault model. Faults are objects that transition through a lifecycle to prevent “flapping”—where a minor glitch sends dozens of emails in a minute.

  • Active: The problem is occurring now.
  • Soaking: The issue cleared quickly, but the system is waiting to see if it reoccurs before notifying you.
  • Flapping: The fault is clearing and reoccurring in rapid succession.
  • Cleared: The issue is fixed, but the record is retained briefly for your attention.
  • Deleted: The fault is finally purged once the retention interval expires.

✅ Best Practices for the “Lazy Admin”

1. Filter out FSM Faults

In UCS Manager, Finite State Machine (FSM) faults are almost always transient. They occur during a task transition—like a server taking a bit too long to finish BIOS POST during a profile association.

The Rule: Focus your alerting on Major and Critical severities that are NOT of type FSM. This will eliminate about 80% of your monitoring “noise.”

2. Leverage Consistency

One of the best features of the UCS ecosystem is that Standalone C-Series and UCS Manager use the same MIBs and Fault IDs. If you have an NMS (Network Management System) set up for your blades, adding standalone rack servers is seamless because the data structure is identical.

3. Use Fault Suppression

Doing maintenance? Don’t let your monitoring system scream at you. Use the Fault Suppression feature (added in UCSM 2.1) to silence alerts on a specific blade or rack server while you are working on it.

4. The XML API Advantage

For standalone C-Series servers, the XML API is the preferred monitoring method. It supports Event Subscription, which proactively “pushes” alerts to your management tool rather than forcing the tool to “pull” or poll for data constantly.
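
As a hedged sketch of that flow: a login call returns a session cookie, and an `eventSubscribe` call using that cookie opens the push channel. The reply below is fabricated (its shape approximates a real `aaaLogin` response), and the hostname in the comments is a placeholder:

```shell
# Fabricated aaaLogin reply; a live one would come from something like:
#   curl -sk https://ucs-host/nuova -d "<aaaLogin inName='admin' inPassword='secret'/>"
resp='<aaaLogin cookie="" response="yes" outCookie="1404914818/b5a9e800-8d3d"/>'

# Extract the session cookie from the outCookie attribute:
cookie=$(printf '%s\n' "$resp" | sed -n 's/.*outCookie="\([^"]*\)".*/\1/p')
echo "$cookie"

# Open the event channel (pushes fault/state changes as they occur):
#   curl -sk https://ucs-host/nuova -d "<eventSubscribe cookie='$cookie'/>"
```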

#CiscoUCS #SysAdmin #DataCenter #Networking #Cisco #ITPro #ServerMonitoring #LazyAdmin #Automation #TechTips

🏗️ The Architecture: How UCS Manager “Thinks”

For B-Series (blade) and integrated C-Series (rack) servers, monitoring is driven by a “Queen Bee and Worker Bee” relationship.

1. Data Management Engine (DME)

The DME is the brain of the system. It maintains the UCS XML database, which stores the current inventory, health, and configuration of every physical and logical component in your domain.

  • Real-Time Only: By default, the DME only shows active faults. It does not store a historical log of everything that ever went wrong.

2. Application Gateway (AG)

The AGs are the “worker bees.” They communicate directly with endpoints (servers, chassis, I/O modules) to report status back to the DME.

  • Server Monitoring: AGs monitor health via the CIMC (Cisco Integrated Management Controller) using IPMI and SEL logs.

3. Northbound Interfaces

These are the “outputs” that you, the administrator, actually interact with:

  • SNMP & Syslog: Read-only interfaces used for external monitoring tools.
  • XML API: A powerful “read-write” interface used for both monitoring and changing configurations.

🚨 Understanding Faults and Their Lifecycle

In Cisco UCS, a fault is a “stateful” object. It doesn’t just appear and disappear; it transitions through a specific lifecycle to prevent “alert fatigue” caused by temporary glitches.

The Fault Lifecycle

  1. Active: The condition occurs, and a fault is raised.
  2. Soaking: The condition clears quickly, but the system waits (the flap interval) to see if it comes back.
  3. Flapping: The fault is raised and cleared several times in rapid succession.
  4. Cleared: The issue is resolved, but the fault remains visible for a “retention interval” so you don’t miss it.
  5. Deleted: The fault is purged from the database.

✅ Best Practices for Monitoring

1. The “Severity” Rule

For UCS Manager, your monitoring tool should focus on faults with a severity of Critical or Major. Ignore “Info” or “Condition” alerts unless you are deep-diving into a specific issue.

2. Filter out “FSM” Faults

Finite State Machine (FSM) faults are usually transient. They often trigger during a task (like a BIOS POST during a service profile association) and resolve themselves on a second or third retry.

  • Note: This only applies to UCS Manager. Standalone C-Series servers do not use FSM, so all their faults are usually relevant.

3. Use the XML API for C-Series

If you are managing standalone C-Series servers, the XML API is the gold standard. It supports Event Subscription, which pushes proactive alerts to you rather than making your tool “pull” data constantly.
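
The severity rule and the FSM filter combine naturally into one pipeline. The fault lines below are fabricated but mimic the general shape of fault output on the UCS CLI:

```shell
# Fabricated fault list shaped like UCS fault output:
sample='Severity: Major     Descr: ether port link down
Severity: Info      Descr: FSM retry in progress
Severity: Critical  Descr: power supply failure
Severity: Major     Descr: FSM sequence timed out'

# Keep Critical/Major, then drop anything FSM-related:
echo "$sample" | grep -E "Severity: (Critical|Major)" | grep -vi "fsm"
```

Only the link-down and power-supply faults survive, which is the signal you actually want paged on.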



#CiscoUCS #SysAdmin #DataCenter #Networking #Cisco #ITPro #ServerMonitoring #LazyAdmin #Virtualization #TechTutorials

The “No-Install” Hack: Enable Disk Cleanup on Server 2008 R2

Need to free up space right now but can’t afford a reboot or a feature installation? Windows Server 2008 and 2008 R2 actually have the Disk Cleanup files hidden inside the System Component Store (WinSxS). You just have to move them to the right place.

The Manual “Copy-Paste” Method

By manually placing these two files into your System32 directory, you enable the cleanmgr command immediately.

1. Locate the Files

Search your C:\Windows\WinSxS directory for the following two files. Note: The long folder names may vary slightly based on your service pack level, so use the search bar if needed.

For Windows Server 2008 R2 (64-bit):

  • The Executable: amd64_microsoft-windows-cleanmgr_..._cleanmgr.exe
  • The Language Resource: amd64_microsoft-windows-cleanmgr.resources_..._en-us_...\cleanmgr.exe.mui

2. Move to System32

Copy (don’t move, just in case) the files to these specific destinations:

  1. Copy cleanmgr.exe to %systemroot%\System32
  2. Copy cleanmgr.exe.mui to %systemroot%\System32\en-US

3. Run the Tool

You don’t need to register anything. Simply open a Command Prompt or the Run dialog (Win+R) and type: cleanmgr.exe
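
If you plan to reuse the tool, its built-in preset flags save you the clicking; a sketch (the preset number 1 is arbitrary):

```
REM Pick the cleanup categories once; the choices are saved under preset 1:
cleanmgr /sageset:1
REM Re-run that saved preset without prompts, e.g. from a scheduled task:
cleanmgr /sagerun:1
```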

Why do it this way?

  • Zero Downtime: No reboots, no “Configuring Windows” screens.
  • Lightweight: You don’t pull in the rest of the “Desktop Experience” (like Media Player or desktop themes) that just adds more bloat to a server.
  • Reliable: You are using the exact binaries Microsoft built for that specific OS version.

#WindowsServer #SysAdmin #ITPro #TechHacks #ServerMaintenance #DiskCleanup #LazyAdmin #Troubleshooting #WindowsAdmin #ZeroDowntime

Dcdiag Overview: The Essential Domain Controller Diagnostic Tool

If you suspect issues with Active Directory—whether it’s slow logins, replication failures, or DNS errors—the first command you should run is Dcdiag. This command-line tool analyzes the state of your Domain Controllers (DCs) across a forest or enterprise and provides a detailed report of abnormal behavior.

Why use Dcdiag?

In a Windows environment, all DCs are peers. Any DC can update the directory, and those changes must replicate to all other peers. If the replication topology is broken or the DC Locator service has inaccurate DNS information, your environment will quickly fall out of sync.

Dcdiag identifies these “silent” failures before they become major outages.


Key Functional Areas Tested

Dcdiag doesn’t just run one check; it executes a series of specialized tests:

  • Connectivity: Verifies if DCs are reachable and have the necessary services running.
  • Replication: Checks for latent or failed replication links between peers.
  • Topology: Ensures the Knowledge Consistency Checker (KCC) has built a valid path for data to travel.
  • Advertising: Confirms the DC is properly announcing its roles (Global Catalog, KDC, etc.) so clients can find it.
  • DNS: Validates that the necessary resource records are present in DNS.

How to Run Dcdiag

To get the most out of the tool, you should run it with administrative credentials.

To test a single server:

DOS

dcdiag /s:DC_Name

To identify and automatically fix minor DNS/Service record issues:

DOS

dcdiag /fix

Understanding the Scope

Dcdiag is flexible. You can target:

  1. A Single Server: For local troubleshooting.
  2. A Site: To check health within a specific physical location.
  3. The Entire Enterprise: To ensure forest-wide health.
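
The three scopes map directly to command-line flags; a quick sketch (the server name is a placeholder):

```
REM Single server:
dcdiag /s:DC01
REM Every DC in the current site, errors only:
dcdiag /a /q
REM Every DC in the enterprise (forest-wide), errors only:
dcdiag /e /q
```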

The LazyAdmin Lesson: Make dcdiag a part of your weekly routine. Catching a replication error on Monday is much easier than fixing a fragmented database on Friday afternoon!

#ActiveDirectory #Dcdiag #SysAdmin #WindowsServer #ITPro #TechSupport #ServerHealth #LazyAdmin #ADTroubleshooting #DataCenter

How to Boot a Windows Server 2003 DC into Directory Services Restore Mode (DSRM)

There are times when Active Directory becomes unstable, or you need to perform a “System State” restore. To do this, you must take the Domain Controller offline by booting into Directory Services Restore Mode (DSRM).

In this mode, the server stops functioning as a DC and instead functions as a standalone member server, allowing you to manipulate the AD database files (ntds.dit) while they aren’t in use.

⚠️ The Golden Rule of DSRM: The Password

When you boot into DSRM, Active Directory is not running. This means you cannot log in with your Domain Admin credentials.

You must use the Local Administrator account, and the password is the unique DSRM Password that was set years ago when the server was first promoted to a Domain Controller (via dcpromo).

Tip: If you’ve forgotten this password but the server is still currently running as a DC, you can reset it before rebooting using the setdsrmpassword command in ntdsutil.
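
The reset flow looks like the transcript sketch below (run on the DC while AD is still online; "null" targets the local server):

```
C:\> ntdsutil
ntdsutil: set dsrm password
Reset DSRM Administrator Password: reset password on server null
Please type password for DS Restore Mode Administrator Account: ********
Please confirm new password: ********
Password has been set successfully.
Reset DSRM Administrator Password: quit
ntdsutil: quit
```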


Step-by-Step: Booting into DSRM Locally

If you have physical access (or console access via iDRAC/iLO/vCenter) to the machine, follow these steps:

  1. Initiate a Restart: Restart the Domain Controller as you normally would.
  2. The F8 Menu: As soon as the BIOS screen disappears and the Operating System selection menu appears, start tapping the F8 key.
  3. Advanced Options: You will be presented with the Windows Advanced Options Menu. Use the arrow keys to select Directory Services Restore Mode (Windows domain controllers only) and press Enter.
  4. Login: Once the Windows login screen appears, log on as the Local Administrator using that specific DSRM password.

What happens in this mode?

  • The NTDS service is stopped.
  • The server does not respond to authentication requests from users.
  • The local SAM (Security Accounts Manager) database handles authentication.
  • You can now run ntdsutil or backup software to perform database maintenance or restores.

#ActiveDirectory #DSRM #SysAdmin #WindowsServer #ITPro #TechSupport #ServerAdmin #LazyAdmin #Troubleshooting #LegacyIT

How to Change the Static IP Address of a Windows Domain Controller

Whether you are re-IPing a subnet or moving a server to a new VLAN, changing a Domain Controller’s IP address requires more than just updating the NIC settings. If DNS records don’t update correctly, users won’t be able to log in, and replication will fail.

Prerequisites

  • Credentials: You must be a member of the Domain Admins group.
  • Access: Log on locally to the system console. If you lose network connectivity during the change, you may need to boot into DSRM to recover.

Step-by-Step: Changing the IP Address

  1. Open Network Connections: Right-click My Network Places (or Network in newer versions) and click Properties.
  2. Edit Adapter: Right-click your Local Area Connection and select Properties.
  3. TCP/IP Settings: Double-click Internet Protocol (TCP/IP).
  4. Update Addresses:
    • Enter the new IP address, Subnet mask, and Default gateway.
    • Update the Preferred and Alternate DNS servers.
    • Note: Usually, a DC points to itself (127.0.0.1) or a partner DC for DNS.
  5. WINS (Optional): If your environment still uses WINS, click Advanced > WINS tab and update any static WINS server entries.
  6. Apply: Click OK until all dialog boxes are closed.

Critical Step: Post-Change Registration

Once the IP is changed, Windows needs to tell the rest of the domain where the DC is now located. Do not skip these commands.

Open a Command Prompt and run:

  1. Register DNS Records:

     DOS

     ipconfig /registerdns

     This forces the DC to update its ‘A’ (Host) record in DNS.

  2. Fix Service Records:

     DOS

     dcdiag /fix

     This ensures that vital SRV records (which clients use to find the DC) are updated to point to the new IP.

Potential Pitfalls: Mapped Drives and Hardcoded IPs

Changing the IP settings won’t affect shared permissions, but it will break any connection made via IP address rather than hostname.

  • Avoid This: net use g: \\192.168.0.199\data (This breaks after the change).
  • Do This: net use g: \\DC1\data (This continues to work regardless of the IP).

The LazyAdmin Lesson: Always use DNS names (Hostnames) for your resources. It saves you from manual updates every time a server moves!

#ActiveDirectory #SysAdmin #WindowsServer #Networking #IPAddress #ITPro #DNS #Troubleshooting #LazyAdmin #ServerAdmin