Node Expansion

Node Expansion for 16G

Introduction

This document guides users through adding one or more scale unit nodes to a Dell Integrated System for Microsoft Azure Stack Hub that is fully installed and operational.

For the official Microsoft documentation on node expansion, see Add scale unit nodes - Azure Stack Hub.

Solution overview

The only way to increase the capacity of an Azure Stack Hub integrated system is to add more physical computers to the existing scale unit. The scale unit is a collection of physical computers that work together to provide compute, storage, and networking resources. Each physical computer in the scale unit is referred to as a scale unit node.

To add a scale unit node, you must physically rack, stack, and cable the new node(s), configure the Top-of-Rack (ToR) switches, ensure that the firmware and BIOS configuration match the existing nodes, and add the new node(s) to the Azure Stack Hub integrated system via the Azure Stack Hub administrator portal. This document guides you through that process.

Audience

This node expansion guide is for Azure Stack Hub 16th-generation (16G) operators and the Dell Customer Service team who intend to add scale unit nodes to an existing Azure Stack Hub integrated system.

End-to-end deployment workflow

Node expansion workflow


Prerequisites

Ensure the following before adding a node:

  • Administrator access to the Azure Stack Hub integrated system.
  • The rack and the power distribution unit (PDU) must be able to accommodate the new nodes.
  • New scale unit (SU) nodes must use the same hardware configuration as the existing Azure Stack Hub scale unit nodes.
  • The Azure Stack Hub integrated system must have the most current Microsoft and Dell Technologies patches and updates. If it does not, update it with the most recent patches and update versions before starting the node expansion process.
  • The Azure Stack Hub integrated system must be healthy. Check the health state by logging in to the Azure Stack Hub administrator portal. Any active health alerts must be resolved before adding a scale unit node.
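
As an alternative to checking health in the portal, the stamp can be validated from the privileged endpoint (PEP). The following is a minimal sketch, assuming PowerShell remoting access to a PEP and CloudAdmin credentials; the PEP IP address shown is a placeholder for your environment's value.

    # Minimal sketch: run a health validation from the privileged endpoint (PEP).
    # The PEP IP address is a placeholder; use one of your stamp's PEP addresses.
    $cred = Get-Credential -Message "CloudAdmin credentials"
    $pep  = New-PSSession -ComputerName "10.128.164.224" -ConfigurationName PrivilegedEndpoint -Credential $cred
    Invoke-Command -Session $pep -ScriptBlock { Test-AzureStack }   # lists any failed validations
    Remove-PSSession $pep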

Rack, stack, and cable physical nodes

After the new scale unit node or nodes arrive at the customer site, on-site engineers must manually rack, stack, and cable the new node(s).

Refer to the following diagrams for Top-of-Rack (ToR) switch cabling guidance.

Scale unit node to ToR-1 network cabling


Scale unit node to ToR-2 network cabling


Scale unit node to ToR-1 iDRAC cabling


Scale unit node to ToR-2 iDRAC cabling


Check Component Readiness

After the new scale unit node or nodes are racked and cabled, power them on and check the LED indicator lights to ensure that all power supply and network cables are connected. Complete the power readiness and network fabric connectivity checks below before proceeding with a scale unit node expansion.

Check Power Readiness

To check power readiness, perform the following steps:

Steps
  1. Use a separate power bus for each power distribution unit (PDU).
  2. Ensure that the PDUs are firmly connected to the applicable power sources.
  3. Ensure that the PDUs are powered on.
  4. All servers and switches are equipped with dual power supplies. Connect these power supplies to separate PDUs to provide power redundancy.

Checking network fabric connectivity

Ensure that the new and existing scale unit nodes are connected to the ToR switches.

  1. The iDRAC (OoB) port on each odd-numbered node connects to the ToR-1 switch.
  2. The iDRAC (OoB) port on each even-numbered node connects to the ToR-2 switch.
  3. Slot 6 Port 1 on each node connects to the ToR-1 switch.
  4. Slot 6 Port 2 on each node connects to the ToR-2 switch.

The following figure shows the port locations for an AS-760 server.

Port Locations for AS-760


S5248F-ON ToR-1 port map

The following table lists the Slot 6 Port 1 connections, cable types, node ports, and switch ports to the S5248F-ON ToR-1 switch.

Origin          Destination                          Cable Type
ToR-1 Port 1    Slot 6 Port 1 on Node 1 (AS-760)     25 GbE Twinax
ToR-1 Port 2    Slot 6 Port 1 on Node 2 (AS-760)     25 GbE Twinax
ToR-1 Port 3    Slot 6 Port 1 on Node 3 (AS-760)     25 GbE Twinax
ToR-1 Port 4    Slot 6 Port 1 on Node 4 (AS-760)     25 GbE Twinax
ToR-1 Port 5    Slot 6 Port 1 on Node 5 (AS-760)     25 GbE Twinax
ToR-1 Port 6    Slot 6 Port 1 on Node 6 (AS-760)     25 GbE Twinax
ToR-1 Port 7    Slot 6 Port 1 on Node 7 (AS-760)     25 GbE Twinax
ToR-1 Port 8    Slot 6 Port 1 on Node 8 (AS-760)     25 GbE Twinax
ToR-1 Port 9    Slot 6 Port 1 on Node 9 (AS-760)     25 GbE Twinax
ToR-1 Port 10   Slot 6 Port 1 on Node 10 (AS-760)    25 GbE Twinax
ToR-1 Port 11   Slot 6 Port 1 on Node 11 (AS-760)    25 GbE Twinax
ToR-1 Port 12   Slot 6 Port 1 on Node 12 (AS-760)    25 GbE Twinax
ToR-1 Port 13   Slot 6 Port 1 on Node 13 (AS-760)    25 GbE Twinax
ToR-1 Port 14   Slot 6 Port 1 on Node 14 (AS-760)    25 GbE Twinax
ToR-1 Port 15   Slot 6 Port 1 on Node 15 (AS-760)    25 GbE Twinax
ToR-1 Port 16   Slot 6 Port 1 on Node 16 (AS-760)    25 GbE Twinax


The following table lists the iDRAC connections, cable types, node ports, and switch ports to the S5248F-ON ToR-1 switch.

Origin          Destination                 Cable Type
ToR-1 Port 25   iDRAC on Node 1 (AS-760)    1 GbE Cat-6
ToR-1 Port 26   iDRAC on Node 3 (AS-760)    1 GbE Cat-6
ToR-1 Port 27   iDRAC on Node 5 (AS-760)    1 GbE Cat-6
ToR-1 Port 28   iDRAC on Node 7 (AS-760)    1 GbE Cat-6
ToR-1 Port 29   iDRAC on Node 9 (AS-760)    1 GbE Cat-6
ToR-1 Port 30   iDRAC on Node 11 (AS-760)   1 GbE Cat-6
ToR-1 Port 31   iDRAC on Node 13 (AS-760)   1 GbE Cat-6
ToR-1 Port 32   iDRAC on Node 15 (AS-760)   1 GbE Cat-6

S5248F-ON ToR-2 port map

The following table lists the Slot 6 Port 2 connections, cable types, node ports, and switch ports to the S5248F-ON ToR-2 switch.

Origin          Destination                          Cable Type
ToR-2 Port 1    Slot 6 Port 2 on Node 1 (AS-760)     25 GbE Twinax
ToR-2 Port 2    Slot 6 Port 2 on Node 2 (AS-760)     25 GbE Twinax
ToR-2 Port 3    Slot 6 Port 2 on Node 3 (AS-760)     25 GbE Twinax
ToR-2 Port 4    Slot 6 Port 2 on Node 4 (AS-760)     25 GbE Twinax
ToR-2 Port 5    Slot 6 Port 2 on Node 5 (AS-760)     25 GbE Twinax
ToR-2 Port 6    Slot 6 Port 2 on Node 6 (AS-760)     25 GbE Twinax
ToR-2 Port 7    Slot 6 Port 2 on Node 7 (AS-760)     25 GbE Twinax
ToR-2 Port 8    Slot 6 Port 2 on Node 8 (AS-760)     25 GbE Twinax
ToR-2 Port 9    Slot 6 Port 2 on Node 9 (AS-760)     25 GbE Twinax
ToR-2 Port 10   Slot 6 Port 2 on Node 10 (AS-760)    25 GbE Twinax
ToR-2 Port 11   Slot 6 Port 2 on Node 11 (AS-760)    25 GbE Twinax
ToR-2 Port 12   Slot 6 Port 2 on Node 12 (AS-760)    25 GbE Twinax
ToR-2 Port 13   Slot 6 Port 2 on Node 13 (AS-760)    25 GbE Twinax
ToR-2 Port 14   Slot 6 Port 2 on Node 14 (AS-760)    25 GbE Twinax
ToR-2 Port 15   Slot 6 Port 2 on Node 15 (AS-760)    25 GbE Twinax
ToR-2 Port 16   Slot 6 Port 2 on Node 16 (AS-760)    25 GbE Twinax


The following table lists the iDRAC connections, cable types, node ports, and switch ports to the S5248F-ON ToR-2 switch.

Origin          Destination                 Cable Type
ToR-2 Port 25   iDRAC on Node 2 (AS-760)    1 GbE Cat-6
ToR-2 Port 26   iDRAC on Node 4 (AS-760)    1 GbE Cat-6
ToR-2 Port 27   iDRAC on Node 6 (AS-760)    1 GbE Cat-6
ToR-2 Port 28   iDRAC on Node 8 (AS-760)    1 GbE Cat-6
ToR-2 Port 29   iDRAC on Node 10 (AS-760)   1 GbE Cat-6
ToR-2 Port 30   iDRAC on Node 12 (AS-760)   1 GbE Cat-6
ToR-2 Port 31   iDRAC on Node 14 (AS-760)   1 GbE Cat-6
ToR-2 Port 32   iDRAC on Node 16 (AS-760)   1 GbE Cat-6

Configure ToR Switches

Using a crash cart with a serial connection or an SSH connection to the ToR switches, ensure that all ports that have a new node connected have been configured correctly.

To configure the ToR switches, perform the following steps:

Steps

  1. Log in to the S5248F-ON ToR-1 switch.

  2. Once logged in, run the commands below to configure the data link connections to the newly added scale unit nodes. For example, if expanding a four-node scale unit with an additional four nodes, you would run the following on the switch:

    conf t
    interface range ethernet 1/1/5-1/1/8
    description "CL01 Nodes NIC"
    no shutdown
    switchport mode trunk
    switchport access vlan 7
    switchport trunk allowed vlan 107
    mtu 9216
    flowcontrol receive off
    priority-flow-control mode on
    service-policy input type network-qos AZS_SERVICES_pfc
    service-policy output type queuing AZS_SERVICES_ets
    ets mode on
    qos-map traffic-class AZS_SERVICES_Que
    spanning-tree bpduguard enable
    spanning-tree guard root
    spanning-tree port type edge
    exit
    
  3. Once the data link port or ports are configured on the switch, proceed to configuring the BMC management (BMCMgmt) ports. For example, if expanding a four-node scale unit with an additional four nodes, you would run the following on the switch:

    conf t
    interface range ethernet 1/1/27:1-1/1/28:1
    description "BMCMgmt Ports"
    no shutdown
    switchport access vlan 125
    mtu 9216
    flowcontrol receive off
    spanning-tree bpduguard enable
    spanning-tree guard root
    end
    
  4. Once the BMC management port or ports have been configured, run the command below to write the new configuration to the switch startup configuration:

    copy running-configuration startup-configuration
    
  5. Once you have completed steps 1-4, repeat them on the ToR-2 switch.
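
To verify the new port configuration before moving on, you can check link and configuration status on the switch. The following is a minimal sketch using the OpenSSH client from the HLH; the switch management address, credentials, and interface number are placeholders, the show commands assume Dell SmartFabric OS10, and non-interactive SSH command execution must be permitted on the switch:

    # Minimal sketch: verify newly configured ports from the HLH over SSH (OS10 assumed).
    # The switch management address and interface number are placeholders.
    ssh admin@<tor1-mgmt-ip> "show interface status"                                  # new node ports should be up
    ssh admin@<tor1-mgmt-ip> "show running-configuration interface ethernet 1/1/5"   # confirm applied settings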

Accessing the iDRAC Direct port

The iDRAC Direct port is a USB port located on the front of the server and is used to access the iDRAC web interface, RACADM, and Redfish API, without needing to connect to the network.

To access the iDRAC Direct port, perform the following steps:

Steps

  1. Obtain a USB Type A to micro-USB cable to connect a laptop or mobile KVM host to the micro-USB port on the front of the server.
  2. From your host, turn off any wireless networks and disconnect from any other hard-wired networks.
  3. Connect the USB Type A to micro-USB cable from your host to the iDRAC Direct micro-USB port located on the front control panel of the AS-760 server.

AS-760 front view with iDRAC Direct port highlighted


  4. Wait for the host to acquire an IP address of 169.254.0.4. It may take several seconds for the IP address to be acquired. The iDRAC will acquire an IP address of 169.254.0.3.
  5. Open a web browser and provide the iDRAC Direct port IP address as the URL. For example, https://169.254.0.3.
  6. At the certificate warning window, click Advanced and then click Proceed to 169.254.0.3 (unsafe).

Certificate warning window


  7. Enter the factory default username and password for the iDRAC, and click Log In.

iDRAC 9 login screen
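
If the login page does not load, you can confirm the link-local connection from the host. The following is a minimal sketch for a Windows host with the standard networking cmdlets:

    # Minimal sketch: confirm the host acquired its link-local address and can reach the iDRAC.
    Get-NetIPAddress -AddressFamily IPv4 |
        Where-Object IPAddress -like "169.254.0.*"            # the host should show 169.254.0.4
    Test-NetConnection -ComputerName 169.254.0.3 -Port 443    # the iDRAC web interface should respond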


Assigning iDRAC IP addresses to the new scale unit nodes for expansion

The iDRAC IP address assignment is a manual step. The iDRAC Direct port can be used on the node(s) to assign the iDRAC IP addresses based on the assigned BMC management (BMCMgmt) network IP addresses from the Azure Stack Hub deployment worksheet.

To manually assign the iDRAC IP addresses to the new scale unit nodes for expansion, perform the following steps:

Steps

Repeat the following steps for each new scale unit node being added.

  1. Follow the steps described above in Accessing the iDRAC Direct port.
  2. From the iDRAC dashboard, browse to iDRAC Settings > Connectivity and expand the Network > IPv4 Settings menu.

iDRAC Settings > Network > IPv4 Settings menu


  3. Set the static IP address, static gateway, and static subnet mask according to the values defined in the Azure Stack Hub deployment worksheet. These IP addresses come from the BMCMgmt /26 subnet provided in the deployment worksheet. A command-line alternative is shown after these steps.

Azure Stack Hub deployment worksheet BMCMgmt network


  4. Click Apply.
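
Alternatively, if the Dell RACADM utility is installed on your host, the same static addressing can be applied from the command line over the iDRAC Direct connection. The following is a minimal sketch; the address, netmask, and gateway values are placeholders for the BMCMgmt values in your deployment worksheet:

    # Minimal sketch: set a static iDRAC IP with remote RACADM over the iDRAC Direct link.
    # The IP, netmask, and gateway are placeholders from the deployment worksheet.
    racadm -r 169.254.0.3 -u root -p <factory-password> setniccfg -s 10.128.164.71 255.255.255.192 10.128.164.65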

Perform a health check on the new nodes

Prior to running the node expansion script, it is important to perform a quick health check of the new node(s). This also allows the iDRAC to perform an inventory collection, which the firmware upgrade automation needs to successfully validate the new node(s).

To perform a health check on the new nodes for expansion, perform the following steps:

Steps

Repeat the following steps for each new scale unit node being added.

  1. If the new node(s) are powered off, press the power button to boot the new node(s).
  2. Allow up to 10 minutes for the new node(s) to fully load BIOS settings and complete the BIOS initialization.
  3. Once complete, access the iDRAC web interface either via the iDRAC Direct port or via a Remote Desktop (RDP) connection from the Hardware Lifecycle Host (HLH).
  4. Verify there are no alerts or warnings on the dashboard.


  5. Navigate to System > Overview > Network Devices and verify that NIC Slot 6 shows both ports with a Link Status of Up.


  6. Navigate to Storage > Overview > Physical Disks and verify all drives are present and healthy.


  7. Navigate to Storage > Overview > Virtual Disks and verify a virtual disk is present.


  8. Once complete, navigate back to the Dashboard and power off the node.
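
The same checks can be scripted against the iDRAC Redfish API mentioned earlier. The following is a minimal sketch, assuming PowerShell 7 or later and an iDRAC9 Redfish endpoint; the BMC IP address is a placeholder:

    # Minimal sketch: query overall node health over the iDRAC9 Redfish API.
    # Requires PowerShell 7+ for -SkipCertificateCheck; the BMC IP is a placeholder.
    $bmcIp  = "10.128.164.71"
    $cred   = Get-Credential -Message "iDRAC credentials"
    $system = Invoke-RestMethod -Uri "https://$bmcIp/redfish/v1/Systems/System.Embedded.1" `
                  -Authentication Basic -Credential $cred -SkipCertificateCheck
    "{0}: PowerState={1}, Health={2}" -f $system.HostName, $system.PowerState, $system.Status.Health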


Update firmware on new scale unit nodes

This section covers updating firmware on scale unit node(s).

Node expansion script

The node expansion script is used to update firmware and apply BIOS configuration on the new scale unit node(s) before they are added to the Azure Stack Hub integrated system.

The script will also update the DeploymentData JSON file with the new scale unit node(s) information.

The script will not add the new scale unit node(s) to the Azure Stack Hub integrated system. See the Add scale unit node in the Azure Stack Hub administrator portal section for more information on how to add the new scale unit node(s) to the Azure Stack Hub integrated system.

Dell Integrated System for Microsoft Azure Stack Hub Lifecycle Manager contains the node expansion script, Invoke-DellAzSHubNodeExpansion.ps1. After completing the Patch and Update process, you should have the latest Lifecycle Manager available in E:\LCM.


To run the node expansion script, perform the following steps:

Steps

  1. From the HLH, open a PowerShell console window as an administrator.

  2. Before running the node expansion script, make sure you have the following information:

    • Number of nodes being added
    • Factory BMC user credentials
    • BMC administrator credentials
    • HLH administrator credentials

  3. Change the directory to E:\LCM:

    Set-Location -Path E:\LCM
    
  4. Run the following command to start the expansion process, replacing “X” with the number of nodes being added.

    .\Invoke-DellAzSHubNodeExpansion.ps1 -AdditionalNodeCount X
    
  5. You will be prompted to input the BMC user, BMC administrator, and HLH administrator credentials before the upgrade begins.

The automation will run the firmware update process one node at a time. Each node will take about an hour to complete.

Once the firmware update process is complete on all nodes, your prompt will look similar to the output below. The new nodes have now been added to the DeploymentData JSON file and are ready to be added to the cluster from the Azure Stack Hub administrator portal:

(...)
VERBOSE: 20250214-225038:Invoke-FirmwarePostUpdate:Invoke-OEMFirmwarePostUpdate completed successfully.
VERBOSE: 20250214-225038:Remove-AutoLogon:Removing auto admin logon.
VERBOSE: 20250214-225038:Resume-HLHBitLocker:Importing BitLocker module.
VERBOSE: 20250214-225038:Resume-HLHBitLocker:Getting BitLocker encrypted volumes.
VERBOSE: 20250214-225039:Resume-HLHBitLocker:Restoring TPM protector on volume 'D:'.
VERBOSE: 20250214-225039:Disable-DHCPServer:Disabling DHCP server service.
VERBOSE: 20250214-225039:Invoke-OEMFirmwareBootstrap:PROGRESS - Cleanup complete.
VERBOSE: 20250214-225039:Invoke-OEMFirmwareBootstrap:PROGRESS - Invoke-OEMFirmwareBootstrap completed successfully.
Finished running Invoke-FirmwareBootstrap for sac42-S1-N08 - 10.128.164.74 with Deployment Data JSON: E:\AzureStack\DeploymentData_new_one.json
  > List of nodes in the Deployment Data JSON: 'E:\AzureStack\DeploymentData_new_one.json'.

Name         BMCIPAddress
----         ------------
sac42-S1-N01 10.128.164.67
sac42-S1-N02 10.128.164.68
sac42-S1-N03 10.128.164.69
sac42-S1-N04 10.128.164.70
sac42-S1-N05 10.128.164.71
sac42-S1-N06 10.128.164.72
sac42-S1-N07 10.128.164.73
sac42-S1-N08 10.128.164.74

(...)

  > Successfully replaced the original Deployment Data JSON with the new one.

Locate Logs

To locate the logs from the node expansion script, perform the following steps:

Steps

  1. From the HLH, open File Explorer and navigate to the C:\MASLogs folder.

  2. The logs that were generated from the node expansion script will have the filename: OEMFirmwareUpdate_[date]-[time].
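
For example, to list the most recent node expansion log from a PowerShell console on the HLH:

    # Minimal sketch: find the newest node expansion log on the HLH.
    Get-ChildItem -Path C:\MASLogs -Filter "OEMFirmwareUpdate_*" |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 1 -Property Name, LastWriteTime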


Add scale unit node in the Azure Stack Hub administrator portal

The operation to add a scale unit node consists of two distinct phases: compute and storage.

The compute expansion process can take one to three hours to complete per scale unit node. The storage expansion process can take several days to complete, depending on the size of the storage pool and the number of scale unit nodes being added.

To add a scale unit node within the Azure Stack Hub administrator portal, complete the steps below:

Steps

  1. Log in to the Azure Stack Hub administrator portal as an Azure Stack Hub administrator.
  2. Browse to All services > Region management > Scale units > [Cluster name] > Nodes.
  3. Click the Add node button.


  4. The Region and the Scale unit will be populated automatically. You will need to specify the BMC IP Address of the scale unit node you are adding.


  5. Once you have entered the IP address of the new scale unit node, click Add at the bottom of the screen.

  6. Click the notifications in the upper right to check the status as shown below:


  7. Once the scale unit node expansion compute process is complete, your notifications will show the following:


  8. To check the status of the storage expansion provisioning task, navigate to All services > Region management > Scale units. The status shows Configuring Storage while the storage expansion provisioning task is in progress. When the task is complete, the status changes to Running. The status can also be checked from PowerShell, as shown below.
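
If the Azure Stack Hub administrator PowerShell modules are installed, the same status can be checked from the command line. The following is a minimal sketch, assuming you are already signed in to the administrator environment; the region name "local" is a placeholder:

    # Minimal sketch: check scale unit and node status with the Azs.Fabric.Admin module.
    # Assumes an authenticated administrator session; "local" is a placeholder region name.
    Import-Module Azs.Fabric.Admin
    Get-AzsScaleUnit -Location "local"        # state changes to Running when storage expansion completes
    Get-AzsScaleUnitNode -Location "local"    # per-node status and power state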
