Dell Technologies Solutions for Microsoft Azure Stack
Dell Integrated System for Microsoft Azure Stack Hub
This hybrid cloud platform delivers infrastructure and platform as a service (IaaS and PaaS) with a consistent Azure experience on-premises and in the public cloud.
Access, create, and share application services securely across Azure and Azure Stack for both traditional and cloud-native applications.
Get hyper-converged infrastructure, networking, backup, and encryption from Dell Technologies, with application development tools from Microsoft.
It delivers enterprise-grade performance and resiliency and includes integrated deployment services from Dell Technologies experts.
One-contact support reduces your operational risk, while flexible consumption models make it easy to use.
Dell Integrated System for Microsoft Azure Stack HCI
A purpose-built system designed to simplify and streamline the Azure multi-cloud ecosystem with an integrated, fully engineered infrastructure foundation.
Designed with full-stack lifecycle management and native Azure integration, the integrated system delivers efficient operations, flexible consumption models, and high-level enterprise expertise.
1 - Azure Stack Hub
Dell Integrated System for Microsoft Azure Stack Hub
Run your own private, autonomous cloud, connected to or disconnected from the public cloud, and build cloud-native apps using consistent Azure services on-premises.
Run connected or disconnected from the public cloud
Comply with data sovereignty laws and regulations
Run Azure-consistent Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS)
Build cloud-native modern apps
1.1 - Release Artifacts
1.1.1 - Release Artifacts for 2407
1.1.1.1 - Release Artifacts for 14G - 2407
Dell Customer Tools
Component
File Name
Supported Version
Dell Customer Toolkit
AzS_DellEMC_CustomerToolkit_2407.6.zip
2407.6
Dell OEM extension package for drivers and firmware updates
Specific to the Windows Server 2019 ASDB image on the Hardware Lifecycle Host.
Windows Server 2019 ASDB SSU
KB5018507
Specific to the Windows Server 2019 ASDB image on the Hardware Lifecycle Host.
OS9 switch firmware (S3048, S4048, S5048)
9.14.2.20
OS9 switch firmware code is in the Dell Customer Toolkit.
OS10 switch firmware (S5248, N3248)
10.5.5.3
OS10 switch firmware code is in the Dell Customer Toolkit.
OEM extension package
2.3.2306.1
Included in the Dell Customer Toolkit and contains the driver and firmware update payload.
Firmware Update Module in OEM Extension Package
2.2.2204.1
N/A
Updated OEM Package components
Server Type
Platform
OS Type
Component
Type
Category
Dell P/N
Previous SWB
Target SWB
Previous Version
Target Version
Scale Units (PowerEdge R840 AS Dense)
R840
N/A
BIOS
Firmware DUP
BIOS
N/A
PC6KF
PF39N
2.17.1
2.18.1
Hardware Lifecycle Host/HLH (PowerEdge R640)
R640
N/A
BIOS
Firmware DUP
BIOS
N/A
NVD2K
YM8R4
2.17.1
2.18.1
Scale Units (PowerEdge R640 AS All Flash)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Hardware Lifecycle Host/HLH (PowerEdge R640)
R640
N/A
iDRAC with Lifecycle controller
Firmware DUP
iDRAC with Lifecycle Controller
N/A
V676X
Y0CWW
6.10.30.10
6.10.80.00
Scale Units (PowerEdge R640 AS All Flash)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
N/A
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R840 AS Dense)
R840
N/A
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
INTEL S4520 RR M.2 SSD SSDSCKKB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
M7F5D
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
INTEL S4620 RR M.2 SSD SSDSC2KG480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
00DJ5
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
INTEL S4620 RR M.2 SSD SSDSC2KG960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
8MHYH
PN1T8
J3GJ8
DL70
DL74
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Hardware Lifecycle Host/HLH (PowerEdge R640)
R640
N/A
KIOXIA KPM6WRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD (Boot)
7F2D1
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
KIOXIA KPM6WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FH1W9
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R740xd)
R740xd
N/A
KIOXIA KPM6WRUG960G
Firmware DUP
Storage - 960GB SAS SSD
R9RTY
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
1081V
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R640 AS All Flash)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Hardware Lifecycle Host/HLH (PowerEdge R640)
R640
N/A
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD (Boot)
DHWH5
376CY
G42W2
BD0D
BD48
Hardware Lifecycle Host/HLH (PowerEdge R640)
R640
Scale Units (PowerEdge R640 AS All Flash)
R640
N/A
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
81H9C
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R640 AS All Flash)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R740xd)
R740xd
N/A
KIOXIA KPM6WVUG960G
Firmware DUP
Storage - 960GB SAS SSD
J92FY
376CY
G42W2
BD0D
BD48
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R640 AS All Flash)
R640
WS2022
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
JX4HG
Y31G3
03.00.01
03.20.02
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R640 AS All Flash)
R640
WS2019
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
JX4HG
Y31G3
03.00.01
03.20.02
Scale Units (PowerEdge R640 Tactical)
R640
Scale Units (PowerEdge R740xd)
R740xd
Scale Units (PowerEdge R840 AS Dense)
R840
Scale Units (PowerEdge R740xd)
R740xd
N/A
SEAGATE ST8000NM024B
Firmware DUP
Storage - 8TB SAS HDD
C5HD0
DTGXD
24J9V
LS0A
LS0C
N/A
14G
N/A
S3048-ON
Switch Firmware
BMC
N/A
T9HGC
9GWDW
9.14.2.18
9.14.2.20
N/A
14G
N/A
S4048-ON
Switch Firmware
TOR
N/A
FX0G8
1NWP3
9.14.2.18
9.14.2.20
N/A
14G
N/A
S5048F-ON
Switch Firmware
TOR
N/A
CYRMP
FK3NJ
9.14.2.18
9.14.2.20
N/A
14G
N/A
S5248F-ON
Switch Firmware
TOR
N/A
K7FT4
H9HK2
10.5.4.7
10.5.5.3
N/A
14G
N/A
N3248TE-ON
Switch Firmware
BMC
N/A
K7FT4
H9HK2
10.5.4.7
10.5.5.3
1.2 - Release Notes
1.2.1 - Release Notes for 2407
1.2.1.1 - Release Notes for 14G - 2407
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2407 and Microsoft 2406
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2407 release and the Microsoft 2406 release.
New features, changed features, and fixes
New features
There are improvements and updates to drivers and firmware.
Secure Connect Gateway (SCG) will be uninstalled from existing deployments.
Changed features
There are no changed features for this release.
Fixes
There are no fixes for this release.
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2407 release and Microsoft 2406 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package (see the version check sketch after this table).
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2406 update on the Microsoft website.
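Before applying an OEM package, you can confirm the currently installed Microsoft and OEM versions so that no OEM update version is skipped. A minimal PowerShell sketch, assuming the Azs.Update.Admin module is installed and you are already connected to the administrator Azure Resource Manager endpoint (the exact property names can vary between module versions):
# Hedged sketch: show the current Microsoft and OEM versions for the update location
Import-Module Azs.Update.Admin
Get-AzsUpdateLocation | Select-Object -Property CurrentVersion, CurrentOemVersion, State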
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2406 update on the Microsoft website.
The Dell Azure Stack Hub Tech Book contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.2.1.2 - Release Notes for 16G - 2407
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2407 and Microsoft 2406
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2407 release and the Microsoft 2406 release.
New features, changed features, and fixes
New features
There are improvements and updates to drivers and firmware.
Changed features
There are no changed features for this release.
Fixes
There are no fixes for this release.
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2407 release and Microsoft 2406 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package.
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2406 update on the Microsoft website.
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2406 update on the Microsoft website.
The Dell Azure Stack Hub Tech Book contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.2.2 - Release Notes for 2404
1.2.2.1 - Release Notes for 14G - 2404
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2404 and Microsoft 2311
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2404 release and the Microsoft 2311 release.
New features, changed features, and fixes
New features
There are improvements and updates to drivers and firmware.
Changed features
In this release, we included a Windows Server 2022 image for both the Management Virtual Machine (MGMT-VM) and the Hardware Lifecycle Host (HLH).
To take advantage of the new features and enhancements in Windows Server 2022, you must redeploy the HLH using the Operating System Deployment (OSD) workflow in the Dell Patch and Update tool.
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2404 release and Microsoft 2311 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package.
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2311 update on the Microsoft website.
Secure Connect Gateway 5.18
SNMP v3 alerts are not received in Secure Connect Gateway version 5.18; see the Knowledge Base article.
Patch and Update (PnU) Operating System Deployment (OSD)
The Hardware Lifecycle Host (HLH) may end up in a BitLocker recovery state during the Operating System Deployment (OSD) workflow in the Patch and Update tool when updating to Windows Server 2022; see the Knowledge Base article.
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2311 update on the Microsoft website.
The Dell Azure Stack Hub Tech Book contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.2.3 - Release Notes for 2401
1.2.3.1 - Release Notes for 14G - 2401
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2401 and Microsoft 2306
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2401 release and the Microsoft 2306 release.
New features, changed features, and fixes
New features
There are improvements and updates to drivers and firmware.
Changed features
There are no changed features for this release.
Fixes
There are no fixes for this release.
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2401 release and Microsoft 2306 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package.
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2306 update on the Microsoft website.
Secure Connect Gateway 5.18
SNMP v3 alerts are not received in Secure Connect Gateway version 5.18; see the Knowledge Base article.
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2306 update on the Microsoft website.
The Dell Azure Stack Hub Tech Book contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.2.4 - Release Notes for 2309
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2309 and Microsoft 2306
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2309 release and the Microsoft 2306 release.
New features, changed features, and fixes
New features
Improvements and updates to Secure Connect Gateway (SCG), drivers, and firmware.
Dell Patch and Update support for Windows Server 2022 and Windows 11.
Changed features
There are no changed features for this release.
Fixes
Fixed an issue with the Dell Patch and Update process where the switch backup task could fail with the error message:
“The term ‘Write-InfoLog’ is not recognized as the name of a cmdlet, function, script file, or operable program”
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2309 release and Microsoft 2306 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package.
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2306 update on the Microsoft website.
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2306 update on the Microsoft website.
The Dell Azure Stack Hub Tech Book contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.2.5 - Release Notes for 2306
Dell Integrated System for Microsoft Azure Stack Hub Release Notes
Current Release Version: Dell 2306 and Microsoft 2301
Release Type: Major (MA)
NOTE
Dell Azure Stack Hub OEM updates must be installed in sequential order; skipping an OEM update version is not supported. These release notes contain supplemental information for the Dell 2306 release and the Microsoft 2301 release.
New features, changed features, and fixes
New features
There are improvements and updates to drivers and firmware.
Fixed an issue where the Dell Patch and Update Tool for Azure Stack Hub could fail during the Switch Firmware Update step with the error 'Error executing SSH command'.
For more information, see the KB article "Azure Stack Hub - Patch and Update 2303 and newer may fail switch firmware update with 'Error executing SSH command'" on the Dell Technologies Support website. Note: This article is accessible only to Dell Azure Stack Hub customers.
Known issues and limitations
This release notes document describes known issues and limitations for the Dell Integrated System for Microsoft Azure Stack Hub solution based on the Dell 2306 release and Microsoft 2301 release.
Item
Description
OEM update
Dell Technologies recommends updating to the n-1 version before applying the latest OEM package.
Microsoft Azure Stack Hub code
For information about known issues and limitations in the Microsoft Azure Stack Hub code, see the Azure Stack Hub 2301 update on the Microsoft website.
Notes and warnings
CAUTION
Before you use the Microsoft Patch and Update process to update Azure Stack Hub, close any active session to the ERCS virtual machines. If an active session is open, the update may fail and must be resumed.
Microsoft fixed issues
For information about fixed issues in this release, see the Azure Stack Hub 2301 update on the Microsoft website.
The Concepts Guide contains information regarding dimensions for Dell Technologies racks, servers, and switches.
This information is updated as needed.
Software Versions
Component
Version
Notes
Microsoft Azure Stack Hub Baseline (for bare-metal deployment)
Firmware and driver versions of each individual component can be found in the Dell Integrated System for Microsoft Azure Stack Hub Support Matrix.
Fixes, enhancements, and other information about each firmware and driver can be found on the Dell Support site.
1.3 - Support Matrix
1.3.1 - Support Matrix for 2407
1.3.1.1 - Support Matrix for 14G - 2407
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2407 release and Microsoft 2406 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
1.3.1.2 - Support Matrix for 16G - 2407
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2407 release and Microsoft 2406 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
Dell Integrated Systems for Microsoft Azure Stack Hub OEM extension package with drivers and firmware updates
AzSHub_16G_Dell2407.13_OEMPackage.zip
2407.13
Dell Integrated Systems for Microsoft Azure Stack Hub HLH ISO
MS2406_Dell2407.26.iso
2407.26
1.3.2 - Support Matrix for 2404
1.3.2.1 - Support Matrix for 14G - 2404
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2404 release and Microsoft 2311 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
1.3.3 - Support Matrix for 2401
1.3.3.1 - Support Matrix for 14G - 2401
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2401 release and Microsoft 2306 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
1.3.4 - Support Matrix for 2309
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2309 release and Microsoft 2306 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
1.3.5 - Support Matrix for 2306
Dell Integrated System for Microsoft Azure Stack Hub - Valid from Dell 2306 release and Microsoft 2301 release
Abstract
This support matrix provides information about supported software and hardware configurations for Dell Integrated System for Microsoft Azure Stack Hub.
Introduction
The Dell Integrated System for Microsoft Azure Stack Hub Support Matrix describes supported drivers, firmware, applications, and hardware for Dell Integrated System for Microsoft Azure Stack Hub.
NOTE
All references to release dates refer to Dell Technologies releases, unless otherwise indicated.
14G Scale Units - PowerEdge R740xd
Component
Type
Category
Dell Part Number (P/N)
Software Bundle (SWB)
Supported Version
INTEL C600/C610/C220/C230/C2000 Series
Driver DUP
Chipset
N/A
3DTYV
10.1.18807.8279
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
Y31G3
03.20.02
Dell HBA330
Driver DUP
Storage - HBA
N/A
MF8G0
2.51.25.02
BIOS
Firmware DUP
BIOS
N/A
YM8R4
2.18.1
BOSS-S1 Firmware
Firmware DUP
BOSS-S1
N/A
3P39V
2.5.13.3024
CPLD
Firmware DUP
CPLD
N/A
G65GH
1.1.4
iDRAC with Lifecycle Controller
Firmware DUP
iDRAC with Lifecycle Controller
N/A
Y0CWW
6.10.80.00
Mellanox ConnectX-4 LX / 25GbE
Firmware DUP
Network/RDMA
N/A
PY7FC
14.32.20.04
Dell SEP Non-expander Storage Backplane
Firmware DUP
Non-expander Storage Backplane
N/A
VV85D
4.35
KIOXIA KPM6XVUG1T60
Firmware DUP
Storage - 1.6TB SAS SSD
GD3N0
6K5N9
BA0D
KIOXIA KPM7XVUG1T60
Firmware DUP
Storage - 1.6TB SAS SSD
4TRHM
69KVR
C106
SAMSUNG MZILG1T6HCJRAD9
Firmware DUP
Storage - 1.6TB SAS SSD
TK47C
G0NG4
DZG0
TOSHIBA PX05SMB160Y
Firmware DUP
Storage - 1.6TB SAS SSD
GVTYD
1DJXX
AS10
KIOXIA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
KIOXIA KPM6XRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
4CN85
6K5N9
BA0D
KIOXIA KPM7WRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
VGMCD
2H3MY
C406
KIOXIA KPM7XRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
6K35K
25VW9
C106
KIOXIA KRM6VVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
N15JP
3P4FR
BJ02
SAMSUNG MZILG1T9HCJRAD9
Firmware DUP
Storage - 1.92TB SAS SSD
MFCD7
PFJ21
TBD
SEAGATE XS1920LE70134
Firmware DUP
Storage - 1.92TB SAS SSD
N6DRV
G04C8
4S0C
SEAGATE XS1920LE70154
Firmware DUP
Storage - 1.92TB SAS SSD
91DPV
84K2D
4D0B
TOSHIBA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
TOSHIBA PX05SVB192Y
Firmware DUP
Storage - 1.92TB SAS SSD
V0K7V
1DJXX
AS10
KIOXIA KRM7VRUG1T92
Firmware DUP
Storage - 1.92TB vSAS SSD
86XW7
MRG3F
CA06
SEAGATE XS1920LE70095
Firmware DUP
Storage - 1.92TB vSAS SSD
K805Y
2PY92
CPE6
HGST HUH721010AL4200
Firmware DUP
Storage - 10TB SAS HDD
YG2KH
MGW91
LS21
HGST HUH721010AL5200
Firmware DUP
Storage - 10TB SAS HDD
07FPR
MGW91
LS21
SEAGATE ST10000NM0256
Firmware DUP
Storage - 10TB SAS HDD
YF87J
CD2WP
TT56
SEAGATE ST10000NM0598
Firmware DUP
Storage - 10TB SAS HDD
HV5CH
VTX9C
RSL5
TOSHIBA MG06SCA10TEY
Firmware DUP
Storage - 10TB SAS HDD
14YYC
4G5GY
EH0D
TOSHIBA MG07SCA12TEY
Firmware DUP
Storage - 12TB SAS HDD
DK7C9
7DTJD
EI0D
HGST HUH721212AL5200
Firmware DUP
Storage - 12TB SAS HDD
9HXK6
4RR8F
NS10
SEAGATE ST12000NM006J
Firmware DUP
Storage - 12TB SAS HDD
M1C0T
4JCT7
PSL9
SEAGATE ST12000NM009G
Firmware DUP
Storage - 12TB SAS HDD
7KT9W
31MF1
ESL3
SEAGATE ST12000NM0158
Firmware DUP
Storage - 12TB SAS HDD
YMN53
VTX9C
RSL5
TOSHIBA MG09SCA12TEY
Firmware DUP
Storage - 12TB SAS HDD
0N96X
XWG1Y
EM04
INTEL S4510 RI M.2 SSD SSDSCKKB480G8R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7FXC3
Y1P10
DL6R
INTEL S3520 RI M.2 SSD SSDSCKJB480G7R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
WCP9P
CHJGV
DL43
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
J3GJ8
DL74
INTEL S4520 RR M.2 SSD SSDSCKKB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
M7F5D
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
00DJ5
J3GJ8
DL74
MICRON 5100 Pro M.2 SSD MTFDDAV480TCB (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GPGC0
YM8KY
E013
MICRON 5300 Pro M.2 SSD MTFDDAV480TDS (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7RKD7
PWVX5
J004
MICRON 5400 Pro M.2 SSD MTFDDAV480TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 480GB SATA SSD
VN68H
C2Y7D
K002
HGST HUS726040ALS210
Firmware DUP
Storage - 4TB SAS HDD
X4FKY
68X4C
KU45
HGST HUS726T4TALS200
Firmware DUP
Storage - 4TB SAS HDD
NT1X2
1DM5F
PU07
TOSHIBA MG04SCA40ENY
Firmware DUP
Storage - 4TB SAS HDD
1MVTT
RG9MK
EG03
SEAGATE ST4000NM017A
Firmware DUP
Storage - 4TB SAS HDD
KRM6X
4RM7F
DL67
SEAGATE ST4000NM019B
Firmware DUP
Storage - 4TB SAS HDD
10N7R
6X24T
LW08
SEAGATE ST4000NM0295
Firmware DUP
Storage - 4TB SAS HDD
W5M2R
XKD4M
DT34
SEAGATE ST6000NM035A
Firmware DUP
Storage - 6TB SAS HDD
CVTK9
61CR4
CSL7
TOSHIBA MG05ACA600E
Firmware DUP
Storage - 6TB SAS HDD
81Y15
5H8JW
GX6D
TOSHIBA MG06SCA600EY
Firmware DUP
Storage - 6TB SAS HDD
XXPPV
4G5GY
EH0D
KIOXIA KPM6XMUG800G
Firmware DUP
Storage - 800GB SAS SSD
H6GCD
6K5N9
BA0D
KIOXIA KPM7XVUG800G
Firmware DUP
Storage - 800GB SAS SSD
X96H8
T7WXC
C10A
WESTERN DIGITAL WUSTM3280BSS200
Firmware DUP
Storage - 800GB SAS SSD
F99F6
X95FJ
G130
TOSHIBA PX05SMB080Y
Firmware DUP
Storage - 800GB SAS SSD
CN3JH
1DJXX
AS10
SEAGATE ST8000NM0185
Firmware DUP
Storage - 8TB SAS HDD
M40TH
6421F
PT55
TOSHIBA MG08SDA800EY
Firmware DUP
Storage - 8TB SAS HDD
NJWMG
1RCYJ
EL01
HGST HUH721008AL4200
Firmware DUP
Storage - 8TB SAS HDD
CDDMJ
MGW91
LS21
HGST HUH721008AL5200
Firmware DUP
Storage - 8TB SAS HDD
KRDKK
MGW91
LS21
HGST HUS728T8TAL5200
Firmware DUP
Storage - 8TB SAS HDD
44YFV
6JJPV
RS07
SEAGATE ST8000NM014A
Firmware DUP
Storage - 8TB SAS HDD
0N660
61CR4
CSL7
SEAGATE ST8000NM015A
Firmware DUP
Storage - 8TB SAS HDD
K6646
RP50F
CSNC
SEAGATE ST8000NM0195
Firmware DUP
Storage - 8TB SAS HDD
DKGYV
WKRV9
PT74
SEAGATE ST8000NM0195
Firmware DUP
Storage - 8TB SAS HDD
KNYW0
N2X70
PT71
SEAGATE ST8000NM024B
Firmware DUP
Storage - 8TB SAS HDD
C5HD0
24J9V
LS0C
TOSHIBA MG06SCA800EY
Firmware DUP
Storage - 8TB SAS HDD
FV725
4G5GY
EH0D
KIOXIA KPM6WVUG960G
Firmware DUP
Storage - 960GB SAS SSD
J92FY
G42W2
BD48
KIOXIA KPM6WVUG960G
Firmware DUP
Storage - 960GB SAS SSD
WMWKG
G42W2
BD48
KIOXIA KPM7WRUG960G
Firmware DUP
Storage - 960GB SAS SSD
T1RHN
TXDJ9
C40A
KIOXIA KPM7XRUG960G
Firmware DUP
Storage - 960GB SAS SSD
KRVY1
25VW9
C106
SAMSUNG MZILG960HCHQAD9
Firmware DUP
Storage - 960GB SAS SSD
RX1JG
PFJ21
TBD
KIOXIA KPM5XVUG960G
Firmware DUP
Storage - 960GB SAS SSD
WFGTH
4P9DW
B026
KIOXIA KPM6WRUG960G
Firmware DUP
Storage - 960GB SAS SSD
R9RTY
G42W2
BD48
SEAGATE KRM6VVUG960G
Firmware DUP
Storage - 960GB SAS SSD
42XXC
3P4FR
BJ02
SEAGATE XS960LE70134
Firmware DUP
Storage - 960GB SAS SSD
2RDWT
G04C8
4S0C
SEAGATE XS960LE70154
Firmware DUP
Storage - 960GB SAS SSD
38R6V
84K2D
4D0B
TOSHIBA KPM5XVUG960G
Firmware DUP
Storage - 960GB SAS SSD
WFGTH
4P9DW
B026
TOSHIBA PX05SVB096Y
Firmware DUP
Storage - 960GB SAS SSD
503M7
1DJXX
AS10
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
8MHYH
J3GJ8
DL74
MICRON 5400 Pro M.2 SSD MTFDDAV960TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 960GB SATA SSD
KHRN0
C2Y7D
K002
KIOXIA KRM7VRUG960G
Firmware DUP
Storage - 960GB vSAS SSD
6RNXC
MRG3F
CA06
SEAGATE XS960LE70095
Firmware DUP
Storage - 960GB vSAS SSD
2M1NG
2PY92
CPE6
Expander Storage Backplane
Firmware DUP
Storage - Backplane
N/A
60K1J
2.52
Dell HBA330
Firmware DUP
Storage - HBA
N/A
124X2
16.17.01.00
uEFI diag
Tools-Software
Dell 64 Bit uEFI Diagnostics
N/A
Y5CF5
4301A38
14G Scale Units - PowerEdge R640 All Flash
Component
Type
Category
Dell Part Number (P/N)
Software Bundle (SWB)
Supported Version
INTEL C600/C610/C220/C230/C2000 Series
Driver DUP
Chipset
N/A
3DTYV
10.1.18807.8279
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
Y31G3
03.20.02
Dell HBA330
Driver DUP
Storage - HBA
N/A
MF8G0
2.51.25.02
BIOS
Firmware DUP
BIOS
N/A
YM8R4
2.18.1
BOSS-S1 Firmware
Firmware DUP
BOSS-S1
N/A
3P39V
2.5.13.3024
CPLD
Firmware DUP
CPLD
N/A
9N4DH
1.0.6
iDRAC with Lifecycle Controller
Firmware DUP
iDRAC with Lifecycle Controller
N/A
Y0CWW
6.10.80.00
Mellanox ConnectX-4 LX / 25GbE
Firmware DUP
Network/RDMA
N/A
PY7FC
14.32.20.04
Dell SEP Non-expander Storage Backplane
Firmware DUP
Non-expander Storage Backplane
N/A
VV85D
4.35
KIOXIA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
1081V
G42W2
BD48
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
DHWH5
G42W2
BD48
KIOXIA KPM7WRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
VGMCD
2H3MY
C406
KIOXIA KPM7XRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
6K35K
25VW9
C106
KIOXIA KRM6VVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
N15JP
3P4FR
BJ02
SAMSUNG MZILG1T9HCJRAD9
Firmware DUP
Storage - 1.92TB SAS SSD
MFCD7
PFJ21
TBD
SEAGATE XS1920LE70134
Firmware DUP
Storage - 1.92TB SAS SSD
N6DRV
G04C8
4S0C
SEAGATE XS1920LE70154
Firmware DUP
Storage - 1.92TB SAS SSD
91DPV
84K2D
4D0B
TOSHIBA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
TOSHIBA PX05SVB192Y
Firmware DUP
Storage - 1.92TB SAS SSD
V0K7V
1DJXX
AS10
KIOXIA KRM7VRUG1T92
Firmware DUP
Storage - 1.92TB vSAS SSD
86XW7
MRG3F
CA06
SEAGATE XS1920LE70095
Firmware DUP
Storage - 1.92TB vSAS SSD
K805Y
2PY92
CPE6
KIOXIA KPM6XVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
NKM7P
6K5N9
BA0D
KIOXIA KPM7WVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
RGP9J
FKXKC
C406
KIOXIA KPM7XVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
V0X40
69KVR
C106
SAMSUNG MZILG3T2HCLSAD9
Firmware DUP
Storage - 3.2TB SAS SSD
5DVPV
G0NG4
DZG0
KIOXIA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
KIOXIA KPM6WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FH1W9
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MD4YN
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
81H9C
G42W2
BD48
KIOXIA KPM7WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
YTVTF
TXDJ9
C40A
KIOXIA KPM7XRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MT0R5
T7WXC
C10A
KIOXIA KRM6VVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FXYGR
3P4FR
BJ02
SAMSUNG MZILG3T8HCLSAD9
Firmware DUP
Storage - 3.84TB SAS SSD
H62RF
PFJ21
TBD
SEAGATE XS3840LE70134
Firmware DUP
Storage - 3.84TB SAS SSD
YM9HP
G04C8
4S0C
SEAGATE XS3840LE70154
Firmware DUP
Storage - 3.84TB SAS SSD
NWGX3
84K2D
4D0B
TOSHIBA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
TOSHIBA PX05SVB384Y
Firmware DUP
Storage - 3.84TB SAS SSD
3DDFT
1DJXX
AS10
KIOXIA KRM7VRUG3T84
Firmware DUP
Storage - 3.84TB vSAS SSD
VJNDD
MRG3F
CA06
SEAGATE XS3840LE70095
Firmware DUP
Storage - 3.84TB vSAS SSD
FT95M
2PY92
CPE6
SEAGATE XS3840LE70115
Firmware DUP
Storage - 3.84TB vSAS SSD
DHK7V
XTC9D
TBD
INTEL S3520 RI M.2 SSD SSDSCKJB480G7R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
WCP9P
CHJGV
DL43
INTEL S4510 RI M.2 SSD SSDSCKKB480G8R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7FXC3
Y1P10
DL6R
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
J3GJ8
DL74
INTEL S4520 RR M.2 SSD SSDSCKKB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
M7F5D
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
00DJ5
J3GJ8
DL74
MICRON 5100 Pro M.2 SSD MTFDDAV480TCB (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GPGC0
YM8KY
E013
MICRON 5300 Pro M.2 SSD MTFDDAV480TDS (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7RKD7
PWVX5
J004
MICRON 5400 Pro M.2 SSD MTFDDAV480TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 480GB SATA SSD
VN68H
C2Y7D
K002
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
8MHYH
J3GJ8
DL74
MICRON 5400 Pro M.2 SSD MTFDDAV960TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 960GB SATA SSD
KHRN0
C2Y7D
K002
Expander Storage Backplane
Firmware DUP
Storage - Backplane
N/A
60K1J
2.52
Dell HBA330
Firmware DUP
Storage - HBA
N/A
124X2
16.17.01.00
uEFI diag
Tools-Software
Dell 64 Bit uEFI Diagnostics
N/A
Y5CF5
4301A38
14G Scale Units - PowerEdge R640 Tactical
Component
Type
Category
Dell Part Number (P/N)
Software Bundle (SWB)
Supported Version
INTEL C600/C610/C220/C230/C2000 Series
Driver DUP
Chipset
N/A
3DTYV
10.1.18807.8279
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
Y31G3
03.20.02
Dell HBA330
Driver DUP
Storage - HBA
N/A
MF8G0
2.51.25.02
BIOS
Firmware DUP
BIOS
N/A
YM8R4
2.18.1
BOSS-S1 Firmware
Firmware DUP
BOSS-S1
N/A
3P39V
2.5.13.3024
CPLD
Firmware DUP
CPLD
N/A
9N4DH
1.0.6
iDRAC with Lifecycle Controller
Firmware DUP
iDRAC with Lifecycle Controller
N/A
Y0CWW
6.10.80.00
Mellanox ConnectX-4 LX / 25GbE
Firmware DUP
Network/RDMA
N/A
PY7FC
14.32.20.04
Dell SEP Non-expander Storage Backplane
Firmware DUP
Non-expander Storage Backplane
N/A
VV85D
4.35
KIOXIA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
1081V
G42W2
BD48
KIOXIA KPM6WVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
DHWH5
G42W2
BD48
KIOXIA KPM7WRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
VGMCD
2H3MY
C406
KIOXIA KPM7XRUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
6K35K
25VW9
C106
KIOXIA KRM6VVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
N15JP
3P4FR
BJ02
SAMSUNG MZILG1T9HCJRAD9
Firmware DUP
Storage - 1.92TB SAS SSD
MFCD7
PFJ21
TBD
SEAGATE XS1920LE70134
Firmware DUP
Storage - 1.92TB SAS SSD
N6DRV
G04C8
4S0C
SEAGATE XS1920LE70154
Firmware DUP
Storage - 1.92TB SAS SSD
91DPV
84K2D
4D0B
TOSHIBA KPM5XVUG1T92
Firmware DUP
Storage - 1.92TB SAS SSD
2WVYG
4P9DW
B026
TOSHIBA PX05SVB192Y
Firmware DUP
Storage - 1.92TB SAS SSD
V0K7V
1DJXX
AS10
KIOXIA KRM7VRUG1T92
Firmware DUP
Storage - 1.92TB vSAS SSD
86XW7
MRG3F
CA06
SEAGATE XS1920LE70095
Firmware DUP
Storage - 1.92TB vSAS SSD
K805Y
2PY92
CPE6
KIOXIA KPM7WVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
RGP9J
FKXKC
C406
KIOXIA KPM7XVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
V0X40
69KVR
C106
SAMSUNG MZILG3T2HCLSAD9
Firmware DUP
Storage - 3.2TB SAS SSD
5DVPV
G0NG4
DZG0
KIOXIA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
KIOXIA KPM6WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FH1W9
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
81H9C
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MD4YN
G42W2
BD48
KIOXIA KPM7WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
YTVTF
TXDJ9
C40A
KIOXIA KPM7XRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MT0R5
T7WXC
C10A
KIOXIA KRM6VVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FXYGR
3P4FR
BJ02
SAMSUNG MZILG3T8HCLSAD9
Firmware DUP
Storage - 3.84TB SAS SSD
H62RF
PFJ21
TBD
SEAGATE XS3840LE70154
Firmware DUP
Storage - 3.84TB SAS SSD
NWGX3
84K2D
4D0B
SEAGATE XS3840LE70134
Firmware DUP
Storage - 3.84TB SAS SSD
YM9HP
G04C8
4S0C
TOSHIBA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
TOSHIBA PX05SVB384Y
Firmware DUP
Storage - 3.84TB SAS SSD
3DDFT
1DJXX
AS10
KIOXIA KRM7VRUG3T84
Firmware DUP
Storage - 3.84TB vSAS SSD
VJNDD
MRG3F
CA06
SEAGATE XS3840LE70095
Firmware DUP
Storage - 3.84TB vSAS SSD
FT95M
2PY92
CPE6
SEAGATE XS3840LE70115
Firmware DUP
Storage - 3.84TB vSAS SSD
DHK7V
XTC9D
TBD
INTEL S3520 RI M.2 SSD SSDSCKJB480G7R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
WCP9P
CHJGV
DL43
INTEL S4510 RI M.2 SSD SSDSCKKB480G8R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7FXC3
Y1P10
DL6R
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
J3GJ8
DL74
INTEL S4520 RR M.2 SSD SSDSCKKB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
M7F5D
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
00DJ5
J3GJ8
DL74
MICRON 5100 Pro M.2 SSD MTFDDAV480TCB (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GPGC0
YM8KY
E013
MICRON 5300 Pro M.2 SSD MTFDDAV480TDS (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7RKD7
PWVX5
J004
MICRON 5400 Pro M.2 SSD MTFDDAV480TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 480GB SATA SSD
VN68H
C2Y7D
K002
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
8MHYH
J3GJ8
DL74
MICRON 5400 Pro M.2 SSD MTFDDAV960TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 960GB SATA SSD
KHRN0
C2Y7D
K002
Expander Storage Backplane
Firmware DUP
Storage - Backplane
N/A
60K1J
2.52
Dell HBA330
Firmware DUP
Storage - HBA
N/A
124X2
16.17.01.00
uEFI diag
Tools-Software
Dell 64 Bit uEFI Diagnostics
N/A
Y5CF5
4301A38
14G Scale Units - PowerEdge R840 Dense
Component
Type
Category
Dell Part Number (P/N)
Software Bundle (SWB)
Supported Version
INTEL Lewisburg C62x Series Chipset Drivers
Driver DUP
Chipset
N/A
3DTYV
10.1.18807.8279
Mellanox ConnectX-4 LX / 25GbE
Driver DUP
Network / RDMA
N/A
Y31G3
03.20.02
Dell HBA330
Driver DUP
Storage - HBA
N/A
MF8G0
2.51.25.02
BIOS
Firmware DUP
BIOS
N/A
PF39N
2.18.1
BOSS-S1 Firmware
Firmware DUP
BOSS-S1
N/A
3P39V
2.5.13.3024
CPLD
Firmware DUP
CPLD
N/A
67GJY
1.0.6
iDRAC with Lifecycle Controller
Firmware DUP
iDRAC with Lifecycle Controller
N/A
Y0CWW
6.10.80.00
Mellanox ConnectX-4 LX / 25GbE
Firmware DUP
Network/RDMA
N/A
PY7FC
14.32.20.04
Dell SEP Non-expander Storage Backplane
Firmware DUP
Non-expander Storage Backplane
N/A
VV85D
4.35
KIOXIA KPM6XVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
NKM7P
6K5N9
BA0D
KIOXIA KPM7WVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
RGP9J
FKXKC
C406
KIOXIA KPM7XVUG3T20
Firmware DUP
Storage - 3.2TB SAS SSD
V0X40
69KVR
C106
SAMSUNG MZILG3T2HCLSAD9
Firmware DUP
Storage - 3.2TB SAS SSD
5DVPV
G0NG4
DZG0
KIOXIA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
KIOXIA KPM6WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FH1W9
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MD4YN
G42W2
BD48
KIOXIA KPM6WVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
81H9C
G42W2
BD48
KIOXIA KPM7WRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
YTVTF
TXDJ9
C40A
KIOXIA KPM7XRUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
MT0R5
T7WXC
C10A
KIOXIA KRM6VVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
FXYGR
3P4FR
BJ02
SAMSUNG MZILG3T8HCLSAD9
Firmware DUP
Storage - 3.84TB SAS SSD
H62RF
PFJ21
TBD
SEAGATE XS3840LE70134
Firmware DUP
Storage - 3.84TB SAS SSD
YM9HP
G04C8
4S0C
SEAGATE XS3840LE70154
Firmware DUP
Storage - 3.84TB SAS SSD
NWGX3
84K2D
4D0B
TOSHIBA KPM5XVUG3T84
Firmware DUP
Storage - 3.84TB SAS SSD
91W3V
4P9DW
B026
TOSHIBA PX05SVB384Y
Firmware DUP
Storage - 3.84TB SAS SSD
3DDFT
1DJXX
AS10
KIOXIA KRM7VRUG3T84
Firmware DUP
Storage - 3.84TB vSAS SSD
VJNDD
MRG3F
CA06
SEAGATE XS3840LE70095
Firmware DUP
Storage - 3.84TB vSAS SSD
FT95M
2PY92
CPE6
SEAGATE XS3840LE70115
Firmware DUP
Storage - 3.84TB vSAS SSD
DHK7V
XTC9D
TBD
INTEL S3520 RI M.2 SSD SSDSCKJB480G7R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
WCP9P
CHJGV
DL43
INTEL S4510 RI M.2 SSD SSDSCKKB480G8R (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7FXC3
Y1P10
DL6R
INTEL S4520 RR M.2 SSD SSDSC2KB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GX439
J3GJ8
DL74
INTEL S4520 RR M.2 SSD SSDSCKKB480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
M7F5D
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG480GZR (Boot)
Firmware DUP
Storage - 480GB SATA SSD
00DJ5
J3GJ8
DL74
MICRON 5100 Pro M.2 SSD MTFDDAV480TCB (Boot)
Firmware DUP
Storage - 480GB SATA SSD
GPGC0
YM8KY
E013
MICRON 5300 Pro M.2 SSD MTFDDAV480TDS (Boot)
Firmware DUP
Storage - 480GB SATA SSD
7RKD7
PWVX5
J004
MICRON 5400 Pro M.2 SSD MTFDDAV480TGA-1BC1ZABDA (Boot)
Firmware DUP
Storage - 480GB SATA SSD
VN68H
C2Y7D
K002
INTEL S4520 RR M.2 SSD SSDSC2KB960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
F6H8H
J3GJ8
DL74
INTEL S4620 RR M.2 SSD SSDSC2KG960GZR (Boot)
Firmware DUP
Storage - 960GB SATA SSD
8MHYH
J3GJ8
DL74
MICRON 5400 Pro M.2 SSD MTFDDAV960TGA-1BC1ZABDA (Boot)
1.4.1 - How to create a service principal name for Azure Stack Hub integrated with Active Directory Federation Services identity using PowerShell
Overview
This article explains how to create a service principal name (SPN) to manage Azure Stack Hub integrated with Active Directory Federation Services (AD FS) identity using PowerShell.
Overview of the creation process for Azure Stack Hub SPN
NOTE
The procedure provided is designed for Azure Stack Hub operators because it requires Privileged Endpoint (PEP) access, and it assumes the Default Provider Subscription and the Administrator Azure Resource Manager endpoint as the defaults; however, the same mechanism can be applied to User Subscriptions with minimal changes to the code.
If you want to assign a role to the SPN for a User Subscription, replace the Administrator Azure Resource Manager endpoint with the Tenant Azure Resource Manager endpoint and the Default Provider Subscription with the name of the subscription you want to modify.
Declare your variables accordingly (a sketch follows this overview).
Log in to your Azure Stack Hub Default Provider Subscription with administrator user credentials (the account must have the Owner role).
CAUTION
This requires an interactive prompt because, when AD FS is your identity provider, user credentials cannot be used non-interactively by default.
This is the main reason to create an SPN: it lets you automate your operations.
Create your AD FS application/service principal.
Assign the appropriate Role to your service principal.
NOTE
As a bonus, we include an example of how to assign the Owner role to an AD FS group.
The current AzureStack modules do not support this natively, but this example shows how to do it via the API.
Assigning roles to a group rather than to individual users is the preferred method.
Log in to your Azure Stack Hub Default Provider Subscription using the SPN account.
Verify SPN authentication and the role assignment.
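As noted in the preceding NOTE, the same flow can target a User Subscription instead of the Default Provider Subscription. A minimal sketch of the variable changes, using placeholder values for the Tenant Azure Resource Manager endpoint and the subscription name:
# Hedged sketch: target a User Subscription via the Tenant Azure Resource Manager endpoint (placeholder values)
$AzureStackTenantArmEndpoint = "https://management.local.azurestack.external/"
$SubscriptionName = "MyUserSubscription"
$EnvironmentName = "AzureStackUser"
$null = Add-AzEnvironment -Name $EnvironmentName -ARMEndpoint $AzureStackTenantArmEndpoint
$null = Connect-AzAccount -Environment $EnvironmentName -UseDeviceAuthentication # Interactive prompt
$null = Set-AzContext -Subscription $SubscriptionName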
Create Azure Stack Hub SPN
Create a PFX Certificate
#region Declare variables
$CertificateName = "ADFSAutomationCert"
$CertStore = "cert:\LocalMachine\My" # This can also be "cert:\CurrentUser\My" but in general service accounts cannot access CurrentUser cert store
$CertSubject = "CN=$CertificateName"
$PfxFilePath = "C:\Temp"
if (-not (Test-Path -Path $PfxFilePath)) {
New-Item -ItemType Directory -Path $PfxFilePath -Force | Out-Null
}
$PfxFilePathFull = Join-Path -Path $PfxFilePath -ChildPath "$($CertificateName).pfx"
$PfxPassword = '""' | ConvertTo-SecureString -AsPlainText -Force # replace "" with an actual password or leave "" for it to be blank
#endregion
#region Create certificate to pass into new Application
$ExpiryDate = (Get-Date).AddDays(365) # You can change this to whatever fits your security profile better, default is 1 year
$Cert = New-SelfSignedCertificate -CertStoreLocation $CertStore -Subject $CertSubject -KeySpec KeyExchange -NotAfter $ExpiryDate
Write-Verbose -Message "Certificate ""$($Cert.Subject)"" with start date ""$($Cert.NotBefore)"" and end date ""$($Cert.NotAfter)"" created in ""$($CertStore)""."
#endregion
#region Export the certificate so that you can re-load it from the .pfx file and import it on other environments
try {
Export-PfxCertificate -Cert $Cert.PsPath -FilePath $PfxFilePathFull -Password $PfxPassword -ErrorAction Stop | Out-Null
} catch {
throw "Failed to export certificate to ""$($PfxFilePathFull)"":`n$($_.Exception.Message)"
}
#endregion
#region Get a cert object from the exported .pfx file - you need it to create the SPN to begin with
$Cert = Get-PfxCertificate -FilePath $PfxFilePathFull -Password $PfxPassword
#endregion
#region Optional step
#region Import the certificate into the certificate store on another environment
Import-PfxCertificate -CertStoreLocation $CertStore -FilePath $PfxFilePathFull -Password $PfxPassword -Exportable
#endregion
#endregion
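To confirm that the certificate is present in the chosen store before creating the SPN, a quick lookup by subject is enough; a minimal sketch using the variables declared above:
# Confirm the self-signed certificate exists in the certificate store
Get-ChildItem -Path $CertStore | Where-Object { $_.Subject -eq $CertSubject } | Select-Object -Property Subject, Thumbprint, NotAfter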
Create Azure Stack Hub SPN that uses certificate credential
#region Declare variables
$CertificateName = "ADFSAutomationCert"
$PfxFilePath = "C:\Temp"
$PfxFilePathFull = Join-Path -Path $PfxFilePath -ChildPath "$($CertificateName).pfx"
$PfxPassword = '""' | ConvertTo-SecureString -AsPlainText -Force
$CertificateObject = Get-PfxCertificate -FilePath $PfxFilePathFull -Password $PfxPassword
$CertificateThumbprint = $CertificateObject.Thumbprint
if (!$CertificateThumbprint) {
throw "Failed to obtain a thumbprint from certificate: $($PfxFilePathFull)"
}
$CloudAdminUsername = "CloudAdmin@azurestack.local"
[SecureString]$CloudAdminPassword = ConvertTo-SecureString "Password123!" -AsPlainText -Force
$ApplicationName = "ADFSAppCert"
$AzureStackRole = "Owner"
$ADGroupName = "AzureStackHubOwners"
$AzureStackAdminArmEndpoint = "https://adminmanagement.local.azurestack.external/"
$EnvironmentName = "AzureStackAdmin"
$PepCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $CloudAdminUsername, $CloudAdminPassword
$PepIPAddress = "x.x.x.224" # e.g. 10.5.30.224
#endregion
#region Register and set an Az environment that targets your Azure Stack Hub instance
Write-Output -InputObject "Connecting to Azure Stack Hub Admin Management Endpoint - $(AzureStackAdminArmEndpoint)"
$null = Add-AzEnvironment -Name $EnvironmentName -ARMEndpoint $AzureStackAdminArmEndpoint
$null = Connect-AzAccount -Environment $EnvironmentName -UseDeviceAuthentication # Interactive prompt
if (((Get-AzContext).Subscription).Name -notlike "Default Provider Subscription") {
throw "Failed to obtain access to the 'Default Provider Subscription'. Please verify the user has been assigned the '$($AzureStackRole)' role for the 'Default Provider Subscription'."
}
#endregion
#region Create a PSSession to the Privileged Endpoint VM
Write-Output -InputObject "Create a PowerShell Session to the Privileged Endpoint VM"
$PepSession = New-PSSession -ComputerName $PepIPAddress -ConfigurationName PrivilegedEndpoint -Credential $PepCreds -SessionOption (New-PSSessionOption -Culture en-US -UICulture en-US)
#endregion
#region Check for existing SPN
Write-Output -InputObject "Check for existing SPN '$($ApplicationName)'"
$SPNObjectCheckJob = Invoke-Command -Session $PepSession -ScriptBlock { Get-GraphApplication } -AsJob | Wait-Job
if ($SPNObjectCheckJob.State -ne "Completed") {
throw "$($SPNObjectCheckJob.ChildJobs | Receive-Job)"
}
$SPNObjectCheck = $SPNObjectCheckJob.ChildJobs.Output | Where-Object { $_.Name -like "Azurestack-$ApplicationName*" } | Select-Object -Last 1
#endregion
#region Create new SPN if one does not exist
if ($SPNObjectCheck) {
Write-Output -InputObject "SPN details`n$($ApplicationName): $($SPNObjectCheck | Out-String)"
} else {
Write-Output -InputObject "No existing SPN found"
Write-Output -InputObject "Create new SPN '$($ApplicationName)'"
$SPNObjectJob = Invoke-Command -Session $PepSession -ScriptBlock { New-GraphApplication -Name $using:ApplicationName -ClientCertificates $using:CertificateObject } -AsJob | Wait-Job
if ($SPNObjectJob.State -ne "Completed") {
throw "$($SPNObjectJob.ChildJobs | Receive-Job)"
}
$SPNObject = $SPNObjectJob.ChildJobs.Output
Write-Output -InputObject "SPN details`n$($ApplicationName): $($SPNObject | Out-String)"
$FullApplicationName = $SPNObject.ApplicationName
#endregion
}
#region Assign SPN the 'Owner' role for the 'Default Provider Subscription'
Write-Output -InputObject "Assign SPN '$($ApplicationName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
if ($FullApplicationName) {
$SPNADFSApp = Get-AzADServicePrincipal | Where-Object { $_.DisplayName -like "$($FullApplicationName)" }
} else {
$SPNADFSApp = Get-AzADServicePrincipal | Where-Object { $_.DisplayName -like "*$($ApplicationName)*" } | Select-Object -Last 1
}
$SPNRoleAssignmentCheck = Get-AzRoleAssignment -ObjectId $SPNADFSApp.AdfsId
if (!($SPNRoleAssignmentCheck) -or ($SPNRoleAssignmentCheck.RoleDefinitionName -ne $AzureStackRole)) {
$null = New-AzRoleAssignment -RoleDefinitionName $AzureStackRole -ServicePrincipalName $SPNADFSApp.ApplicationId.Guid
#region Verify SPN has been assigned the 'Owner' role for the 'Default Provider Subscription'
$SPNRoleAssignment = Get-AzRoleAssignment -ObjectId $SPNADFSApp.AdfsId
if (!($SPNRoleAssignment) -or ($SPNRoleAssignment.RoleDefinitionName -ne $AzureStackRole)) {
throw "Failed to assign SPN '$($ApplicationName)' the '$($AzureStackRole)' role for the Default Provider Subscription"
}
#endregion
}
#endregion
#region Assign AD group 'AzureStackHubOwners' the 'Owner' role for the 'Default Provider Subscription'
Write-Output -InputObject "Assign AD group '$($ADGroupName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
$ADGroup = Get-AzADGroup -DisplayNameStartsWith $ADGroupName
$SubId = (Get-AzSubscription -SubscriptionName "Default Provider Subscription").Id
$OwnerRoleId = (Get-AzRoleDefinition -Name $AzureStackRole).Id
$APIPayloadHash = @{
"properties" = @{
"roleDefinitionId" = "/subscriptions/$($SubId)/providers/Microsoft.Authorization/roleDefinitions/$($OwnerRoleId)"
"principalId" = "$($ADGroup.AdfsId)"
}
} | ConvertTo-Json -Depth 50
$APIPath = "/subscriptions/$($SubId)/providers/Microsoft.Authorization/roleAssignments/$($OwnerRoleId)?api-version=2015-07-01"
$APIResponse = Invoke-AzRestMethod -Path $APIPath -Method "PUT" -Payload $APIPayloadHash
if ($APIResponse.StatusCode -ne "201") {
throw "Failed to create role assignment for ""$($ADGroup.DisplayName)"" in subscription ""$($SubId)"" with role ""$($AzureStackRole)"" and role ID ""$($OwnerRoleId)"""
}
#endregion
#region Verify AD group 'AzureStackHubOwners' has been assigned the 'Owner' role for the 'Default Provider Subscription'
$ADGroupRoleAssignment = Get-AzRoleAssignment -ObjectId $ADGroup.AdfsId
if (!($ADGroupRoleAssignment) -or ($ADGroupRoleAssignment.RoleDefinitionName -ne $AzureStackRole)) {
throw "Failed to assign AD group '$($ADGroupName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
}
#endregion
#region Obtain authentication information
# GUID of the directory tenant
$TenantId = (Get-AzContext).Tenant.Id
Write-Output -InputObject "TenantId: $($TenantId)"
Write-Output -InputObject ""
Write-Output -InputObject "ApplicationName: $($SPNADFSApp.DisplayName)"
Write-Output -InputObject ""
Write-Output -InputObject "ApplicationId: $($SPNADFSApp.ApplicationId.Guid)"
Write-Output -InputObject ""
Write-Output -InputObject "CertificateThumbprint: $($CertificateThumbprint)"
Write-Output -InputObject ""
Write-Output -InputObject "Admin ARM Endpoint: $($AzureStackAdminArmEndpoint)"
#endregion
#region Verify if SPN can authenticate to Azure Stack Hub Admin Management Endpoint
Write-Output -InputObject "Verify if SPN can authenticate to Azure Stack Hub Admin Management Endpoint"
$null = Clear-AzContext -Force
$null = Connect-AzAccount -Environment $EnvironmentName -ServicePrincipal -Tenant $TenantId -ApplicationId $SPNADFSApp.ApplicationId.Guid -CertificateThumbprint $CertificateThumbprint
if (((Get-AzContext).Subscription).Name -notlike "Default Provider Subscription") {
throw "Failed to obtain access to the 'Default Provider Subscription'. Please verify the SPN has been assigned the '$($AzureStackRole)' role for the 'Default Provider Subscription'."
} else {
Write-Output -InputObject "Your SPN can successfully authenticate with ARM Endpoint $($AzureStackAdminArmEndpoint) and has got access to the 'Default Provider Subscription'"
}
#endregion
#region Remove sessions
if ($PepSession) {
Write-Output -InputObject "Removing PSSSession to the Privileged Endpoint"
Remove-PSSession -Session $PepSession
}
$CheckContext = Get-AzContext | Where-Object { $_.Environment -like $EnvironmentName }
if ($CheckContext) {
Write-Output -InputObject "Disconnecting from AzS Hub Admin Management Endpoint: $($CheckContext.Environment.ResourceManagerUrl)"
$null = Disconnect-AzAccount
}
#endregion
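Once the SPN and its role assignment are verified, a later unattended run only needs the values printed in the authentication information region above. A minimal sketch, using placeholder values in place of your recorded TenantId, ApplicationId, and certificate thumbprint:
# Hedged sketch: reconnect non-interactively with the recorded SPN details (placeholder values)
$TenantId = "00000000-0000-0000-0000-000000000000"
$ApplicationId = "00000000-0000-0000-0000-000000000000"
$CertificateThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"
$AzureStackAdminArmEndpoint = "https://adminmanagement.local.azurestack.external/"
$null = Add-AzEnvironment -Name "AzureStackAdmin" -ARMEndpoint $AzureStackAdminArmEndpoint
$null = Connect-AzAccount -Environment "AzureStackAdmin" -ServicePrincipal -Tenant $TenantId -ApplicationId $ApplicationId -CertificateThumbprint $CertificateThumbprint
Get-AzContext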
CAUTION
Using a client secret is less secure than using an X509 certificate credential. Not only is the authentication mechanism less secure, but it also typically requires embedding the secret in the client app source code. As such, for production apps, you’re strongly encouraged to use a certificate credential.
#region Declare variables
$CloudAdminUsername = "CloudAdmin@azurestack.local"
[SecureString]$CloudAdminPassword = ConvertTo-SecureString "Password123!" -AsPlainText -Force
$ApplicationName = "ADFSAppCert"
$AzureStackRole = "Owner"
$ADGroupName = "AzureStackHubOwners"
$AzureStackAdminArmEndpoint = "https://adminmanagement.local.azurestack.external/"
$EnvironmentName = "AzureStackAdmin"
$PepCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $CloudAdminUsername, $CloudAdminPassword
$PepIPAddress = "x.x.x.224" # e.g. 10.5.30.224
#endregion
#region Register and set an Az environment that targets your Azure Stack Hub instance
Write-Output -InputObject "Connecting to Azure Stack Hub Admin Management Endpoint - $(AzureStackAdminArmEndpoint)"
$null = Add-AzEnvironment -Name $EnvironmentName -ARMEndpoint $AzureStackAdminArmEndpoint
$null = Connect-AzAccount -Environment $EnvironmentName -UseDeviceAuthentication # Interactive prompt
if (((Get-AzContext).Subscription).Name -notlike "Default Provider Subscription") {
throw "Failed to obtain access to the 'Default Provider Subscription'. Please verify the user has been assigned the '$($AzureStackRole)' role for the 'Default Provider Subscription'."
}
#endregion
#region Create a PSSession to the Privileged Endpoint VM
Write-Output -InputObject "Create a PowerShell Session to the Privileged Endpoint VM"
$PepSession = New-PSSession -ComputerName $PepIPAddress -ConfigurationName PrivilegedEndpoint -Credential $PepCreds -SessionOption (New-PSSessionOption -Culture en-US -UICulture en-US)
#endregion
#region Check for existing SPN
Write-Output -InputObject "Check for existing SPN '$($ApplicationName)'"
$SPNObjectCheckJob = Invoke-Command -Session $PepSession -ScriptBlock { Get-GraphApplication } -AsJob | Wait-Job
if ($SPNObjectCheckJob.State -ne "Completed") {
throw "$($SPNObjectCheckJob.ChildJobs | Receive-Job)"
}
$SPNObjectCheck = $SPNObjectCheckJob.ChildJobs.Output | Where-Object { $_.Name -like "Azurestack-$ApplicationName*" } | Select-Object -Last 1
#endregion
#region Create new SPN if one does not exist
if ($SPNObjectCheck) {
Write-Output -InputObject "SPN details`n$($ApplicationName): $($SPNObjectCheck | Out-String)"
} else {
Write-Output -InputObject "No existing SPN found"
Write-Output -InputObject "Create new SPN '$($ApplicationName)'"
$SPNObjectJob = Invoke-Command -Session $PepSession -ScriptBlock { New-GraphApplication -Name $using:ApplicationName -GenerateClientSecret } -AsJob | Wait-Job
if ($SPNObjectJob.State -ne "Completed") {
throw "$($SPNObjectJob.ChildJobs | Receive-Job)"
}
$SPNObject = $SPNObjectJob.ChildJobs.Output
Write-Output -InputObject "SPN details`n$($ApplicationName): $($SPNObject | Out-String)"
$FullApplicationName = $SPNObject.ApplicationName
$SPNClientId = $SPNObject.ClientId
$SPNClientSecret = $SPNObject.ClientSecret | ConvertTo-SecureString -AsPlainText -Force
$SPNCreds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $SPNClientId, $SPNClientSecret
#endregion
}
#region Assign SPN the 'Owner' role for the 'Default Provider Subscription'
Write-Output -InputObject "Assign SPN '$($ApplicationName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
if ($FullApplicationName) {
$SPNADFSApp = Get-AzADServicePrincipal | Where-Object { $_.DisplayName -like "$($FullApplicationName)" }
} else {
$SPNADFSApp = Get-AzADServicePrincipal | Where-Object { $_.DisplayName -like "*$($ApplicationName)*" } | Select-Object -Last 1
}
$SPNRoleAssignmentCheck = Get-AzRoleAssignment -ObjectId $SPNADFSApp.AdfsId
if (!($SPNRoleAssignmentCheck) -or ($SPNRoleAssignmentCheck.RoleDefinitionName -ne $AzureStackRole)) {
$null = New-AzRoleAssignment -RoleDefinitionName $AzureStackRole -ServicePrincipalName $SPNADFSApp.ApplicationId.Guid
#region Verify SPN has been assigned the 'Owner' role for the 'Default Provider Subscription'
$SPNRoleAssignment = Get-AzRoleAssignment -ObjectId $SPNADFSApp.AdfsId
if (!($SPNRoleAssignment) -or ($SPNRoleAssignment.RoleDefinitionName -ne $AzureStackRole)) {
throw "Failed to assign SPN '$($ApplicationName)' the '$($AzureStackRole)' role for the Default Provider Subscription"
}
#endregion
}
#endregion
#region Assign AD group 'AzureStackOwners' the 'Owner' role for the 'Default Provider Subscription'
Write-Output -InputObject "Assign AD group '$($ADGroupName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
$ADGroup = Get-AzADGroup -DisplayNameStartsWith $ADGroupName
$SubId = (Get-AzSubscription -SubscriptionName "Default Provider Subscription").Id
$OwnerRoleId = (Get-AzRoleDefinition -Name $AzureStackRole).Id
$APIPayloadHash = @{
"properties" = @{
"roleDefinitionId" = "/subscriptions/$($SubId)/providers/Microsoft.Authorization/roleDefinitions/$($OwnerRoleId)"
"principalId" = "$($ADGroup.AdfsId)"
}
} | ConvertTo-Json -Depth 50
$APIPath = "/subscriptions/$($SubId)/providers/Microsoft.Authorization/roleAssignments/$($OwnerRoleId)?api-version=2015-07-01"
$APIResponse = Invoke-AzRestMethod -Path $APIPath -Method "PUT" -Payload $APIPayloadHash
if ($APIResponse.StatusCode -ne "201") {
throw "Failed to create role assignment for ""$($ADGroup.DisplayName)"" in subscription ""$($SubId)"" with role ""$($AzureStackRole)"" and role ID ""$($OwnerRoleId)"""
}
#endregion
#region Verify AD group 'AzureStackOwners' has been assigned the 'Owner' role for the 'Default Provider Subscription'
$ADGroupRoleAssignment = Get-AzRoleAssignment -ObjectId $ADGroup.AdfsId
if (!($ADGroupRoleAssignment) -or ($ADGroupRoleAssignment.RoleDefinitionName -ne $AzureStackRole)) {
throw "Failed to assign AD group '$($ADGroupName)' the '$($AzureStackRole)' role for the 'Default Provider Subscription'"
}
#endregion
#region Obtain authentication information
# GUID of the directory tenant
$TenantId = (Get-AzContext).Tenant.Id
Write-Output -InputObject "TenantId: $($TenantId)"
Write-Output -InputObject ""
Write-Output -InputObject "ApplicationName: $($SPNADFSApp.DisplayName)"
Write-Output -InputObject ""
Write-Output -InputObject "ApplicationId: $($SPNADFSApp.ApplicationId.Guid)"
Write-Output -InputObject ""
Write-Output -InputObject "ClientSecret: $($SPNObject.ClientSecret)"
Write-Output -InputObject ""
Write-Output -InputObject "Admin ARM Endpoint: $($AzureStackAdminArmEndpoint)"
#endregion
#region Verify if SPN can authenticate to Azure Stack Hub Admin Management Endpoint
Write-Output -InputObject "Verify if SPN can authenticate to Azure Stack Hub Admin Management Endpoint"
$null = Clear-AzContext -Force
$null = Connect-AzAccount -Environment $EnvironmentName -ServicePrincipal -Tenant $TenantId -Credential $SPNCreds
if (((Get-AzContext).Subscription).Name -notlike "Default Provider Subscription") {
throw "Failed to obtain access to the 'Default Provider Subscription'. Please verify the SPN has been assigned the '$($AzureStackRole)' role for the 'Default Provider Subscription'."
} else {
Write-Output -InputObject "Your SPN can successfully authenticate with ARM Endpoint $($AzureStackAdminArmEndpoint) and has got access to the 'Default Provider Subscription'"
}
#endregion
#region Remove sessions
if ($PepSession) {
Write-Output -InputObject "Removing PSSSession to the Privileged Endpoint"
Remove-PSSession -Session $PepSession
}
$CheckContext = Get-AzContext | Where-Object { $_.Environment -like $EnvironmentName }
if ($CheckContext) {
Write-Output -InputObject "Disconnecting from AzS Hub Admin Management Endpoint: $($CheckContext.Environment.ResourceManagerUrl)"
$null = Disconnect-AzAccount
}
#endregion
2 - Azure Stack HCI
Dell Integrated System for Microsoft Azure Stack HCI
Delivered as an Azure service, run virtualized applications on-premises with full stack lifecycle management while easily connecting resources to Azure.
Refresh and modernize aging virtualization platforms
Integrate with Azure for hybrid capabilities
Provide compute and storage at remote branch offices
Deploy and manage Azure cloud and Azure Stack HCI anywhere with Azure Arc as a single control plane
2.1 - Planning Azure Stack
This documentation is written from a system administrator's point of view as an addition to the official documentation. It is intended to show IT professionals how Azure Stack HCI compares to traditional solutions and Windows Server, with a focus on the Dell portfolio.
2.1.1 - 01. Operating System
Planning Operating System
Storage Spaces Direct is the technology contained in both the Azure Stack HCI OS and Windows Server Datacenter. It enables you to create a hyperconverged cluster: a software storage bus allows every cluster node to access all physical disks in the cluster.
Familiar for IT
Both operating systems are easy to use for Windows Server admins who are familiar with failover clustering, as both systems use traditional technologies (Failover Clustering, Hyper-V) while domain joined. Therefore, all familiar tools (such as Server Manager, MMC, and Windows Admin Center) can be used for management.
Hyper-Converged infrastructure stack
Both Azure Stack HCI and Windows Server use the same technology that has been well known since Windows Server 2016 - Storage Spaces Direct. Storage Spaces Direct enables all servers to see all disks from every node, so the Storage Spaces stack can define resiliency and place data (slabs) in different fault domains - in this case, nodes. Since everything happens in software, devices such as high-speed NVMe disks can be used and shared through the software stack over high-speed RDMA network adapters.
Delivered as an Azure hybrid service
The difference between the two products is in the way the service is consumed. With Windows Server, it is the traditional "buy and forget" model: the operating system is supported for 5+5 years (mainstream plus extended support) and you pay upfront (OEM license, EA license, and so on). Azure Stack HCI licensing can be dynamic. Imagine investing in a system with 40 cores per node while initially using only 16 cores - you can easily configure the number of enabled cores in Dell systems using OpenManage Integration in Windows Admin Center and then pay only for what you consume.
Additionally, you can purchase Windows Server licenses as a subscription add-on.
OS Lifecycle
The main difference is the way features are developed for each platform. Windows Server follows the traditional development cycle (a new version every 2.5-3 years), while Azure Stack HCI follows the cloud development cycle together with the Windows client OS (a new version every year).
As a result, new features are developed and delivered into the Azure Stack HCI OS every year.
While both the Windows Server and Azure Stack HCI operating systems can run on a virtualization host, going forward the main focus will be the Azure Stack HCI OS for hosts and Windows Server for guest workloads. For more information, see the video below.
A comparison of Azure Stack HCI and Windows Server is available in the official docs.
2.1.2 - 02. Supporting Infrastructure
Planning Supporting Infrastructure
There are several deployment sizes; let's split them into three main categories. While all three categories can be managed with just one management machine and PowerShell, with more clusters or racks, managing the infrastructure can become a very complex task. We can assume that, with Azure Stack HCI hybrid capabilities, more of this infrastructure will move into the cloud.
In many cases we hear that, due to security, DHCP is not allowed in the server subnet. Limiting which servers can receive an IP address can be done with MAC address filtering, as sketched below.
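As an illustration (assuming the Windows DHCP Server role and its DhcpServer PowerShell module; the MAC address and description are placeholders), an allow list could be built like this:
# Illustrative sketch - MAC address and description are placeholders
Add-DhcpServerv4Filter -List Allow -MacAddress "F8-BC-12-00-00-01" -Description "AzSHCI Node 1 management NIC"
# Enforce the allow list so only listed MAC addresses receive a lease
Set-DhcpServerv4FilterList -Allow $true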
Management infrastructure can be deployed in a separate domain from the hosted virtual machines to further increase security.
The minimum components are a domain controller and a management machine. The management machine can run Windows 10 or a Windows Server version at least as new as the managed servers (for example, Windows 10 1809 and newer can manage Windows Server 2019). A DHCP server helps significantly because managed servers can receive an IP address automatically - you can manage them remotely without logging in to configure a static IP - but it is not mandatory.
Windows Admin Center can be installed on the admin workstation. From there, the infrastructure can be managed using Windows Admin Center, PowerShell, or legacy remote management tools (such as MMC).
Medium infrastructure
A medium infrastructure assumes you have multiple administrators and/or multiple clusters in your environment. Additional servers dedicated to management can be introduced to help centralize or automate management.
A large infrastructure assumes you have more clusters spanning multiple racks or even sites. SCVMM is essential here to help with bare-metal deployment, network management, and patch management. Supporting roles (WSUS, WDS, library servers) managed by SCVMM can be deployed across multiple servers. SCVMM supports deployment in HA mode (active-passive) with SQL Server Always On. DHCP is mandatory for bare-metal deployment because the server needs to obtain an IP address during PXE boot.
2.1.3 - 03. Planning Deployment Models and Workloads
Planning Deployment Models and Workloads
Depending on the size, usage, and complexity of the environment, you need to decide which deployment model for Azure Stack HCI you want to choose. The hyperconverged deployment is the simplest and is great for that simplicity; however, for specialized tasks (such as CPU/RAM-intensive virtual machines) with a moderate to high storage workload, it might be more effective to split the CPU/RAM-intensive workload and the storage into a converged deployment.
HyperConverged deployments
Hyperconverged deployments can be as small as 2 nodes connected directly with a network cable and grow to multi-PB 16-node clusters (unlike traditional clusters, where the limit is 64 nodes). Minimum requirements are described in the hardware requirements doc.
Simplicity is the main benefit of this deployment model. All hardware is standardized and from one vendor, so there is a high chance that hundreds of customers run the exact same configuration, which significantly helps with troubleshooting. There are no extra hops compared to a SAN, where some IOs go over the FC infrastructure and some over the LAN (CSV redirection).
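To make the hyperconverged model concrete, here is a minimal sketch of creating a small cluster - node names, the cluster name, and the static address are placeholders, and the Hyper-V and Failover Clustering features are assumed to be installed on each node:
# Illustrative sketch - names and address are placeholders
New-Cluster -Name "AzSHCI-Cluster" -Node "Node1","Node2" -NoStorage -StaticAddress "10.0.0.111"
# Enable Storage Spaces Direct; it claims all eligible local disks and creates the storage pool
Enable-ClusterStorageSpacesDirect -CimSession "AzSHCI-Cluster"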
Converged deployments
Converged deployments have a separate AzSHCI cluster with the Scale-Out File Server role installed. Multiple compute clusters (up to 64 nodes each) can access a single Scale-Out File Server. This design allows you to use both Datacenter and Standard licenses for the compute clusters.
This design adds some complexity, because virtual machines access their storage over the network. The main benefit is that one VM consuming all CPU cycles will not affect other VMs through degraded storage performance, and you can scale storage independently from RAM and CPU (if you run out of CPU, there is no need to buy a server loaded with storage). This design allows higher density, a better deduplication job schedule, and decreased east-west traffic (as VMs are pointed to the CSV owner node using the Witness Service or the newer SMB connection move-on-connect behavior).
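As a rough sketch of the converged model (cluster, share, path, and account names are placeholders), the storage cluster exposes its CSVs through a Scale-Out File Server share that the compute clusters then use:
# Illustrative sketch - names are placeholders
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "StorageCluster"
# Share a folder on a Cluster Shared Volume and grant the compute cluster computer accounts access
New-SmbShare -Name "VMStore01" -Path "C:\ClusterStorage\Volume01\Shares\VMStore01" -FullAccess "Domain\ComputeNode1$","Domain\ComputeNode2$"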
Cluster Sets
If multiple clusters are using multiple Scale-Out File Servers, or if multiple hyperconverged clusters are present, cluster sets help put all clusters under one namespace and allow you to define fault domains. When a VM is created, a fault domain can be used instead of pointing the VM to a specific node or cluster.
Technically, all VMs are located on a SOFS share that is presented using a DFS-N namespace. This namespace is hosted on a management cluster that does not need any shared storage, as all configuration data is kept in the registry.
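For illustration, a minimal sketch using the cluster set cmdlets (all names are placeholders and assume the member clusters already exist):
# Illustrative sketch - names are placeholders
New-ClusterSet -Name "CSMASTER" -NamespaceRoot "SOFS-CLUSTERSET" -CimSession "SET-CLUSTER"
# Add existing clusters as members of the cluster set
Add-ClusterSetMember -ClusterName "CLUSTER1" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER1"
Add-ClusterSetMember -ClusterName "CLUSTER2" -CimSession "CSMASTER" -InfraSOFSName "SOFS-CLUSTER2"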
User Profile Disks host
Azure Stack HCI can also host user profile disks (UPDs). Since a UPD is a VHD (both with the native Windows Server functionality and with FSLogix), a Scale-Out File Server can be used, as the workload pattern is the same as for virtual machines. However, it might also make sense to use a file server hosted in a virtual machine.
SQL
There are multiple ways to deploy SQL Server on an Azure Stack HCI cluster, but in the end there are two main ones - deploying SQL Server in a virtual machine, or in AKS (Azure Kubernetes Service) as a SQL Managed Instance.
SQL performance in one virtual machine (out of 40 on a 4-node cluster) running a SQL workload (database forced to read from disk)
Kubernetes
TBD
VDI
TBD
2.1.4 - 04. Planning Network Architecture
Planning Network Architecture
Correctly planning the network design is a key part of Azure Stack HCI planning. With an incorrect configuration, the infrastructure might not be reliable under load. Depending on scale, a more complex solution might make sense to better control traffic.
In general there are two types of traffic - east-west and north-south. East-west traffic is handled by the SMB protocol (all traffic generated by the Storage Bus Layer and Live Migration). North-south traffic is mostly generated by virtual machines.
Physical switches should be configured with a native VLAN for management traffic. This helps significantly because you can communicate over the network without configuring a VLAN on the physical host. It helps with bare-metal deployment and also with virtual switch creation when the management network uses a vNIC.
Several abbreviations were used in the text above. Let's explain a few; a short sketch showing how they fit together follows the list.
pSwitch = Physical switch. It is your top-of-rack (TOR) switch.
vSwitch = Virtual switch. It is the switch created on the host using the New-VMSwitch command.
vNIC = Virtual network adapter connected to the management OS (the parent partition). This is the NIC usually used for management or SMB.
vmNIC = Virtual machine network adapter. This is a virtual NIC connected to a virtual machine.
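As a minimal sketch of how these pieces relate (adapter names, vNIC names, and VLAN IDs are placeholders), a converged SET switch with management and SMB vNICs could be created like this:
# Illustrative sketch - adapter, switch, and VLAN values are placeholders
New-VMSwitch -Name "vSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Add management and SMB vNICs to the parent partition
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB01" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB02" -SwitchName "vSwitch"
# Tag the SMB vNICs with their VLANs (management stays on the native VLAN)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB01" -Access -VlanId 711
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB02" -Access -VlanId 712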
Topology design
Single subnet
Support for single-subnet SMB Multichannel in a cluster was added in Windows Server 2016. This allows you to configure only a single subnet for multiple network adapters dedicated to SMB traffic. It is the recommended topology for smaller deployments, where the interconnect between TOR switches can handle at least 50% of the network throughput generated by the nodes (as there is a 50% chance that traffic travels across the switch interconnect - m-LAG). For example, with 4 nodes, each with 2x25Gbps connections, you should have at least a 100Gbps connection between the TOR switches.
TOR switches will be configured with a trunk and a native (access) VLAN for management.
Two subnets
With an increased number of nodes, there might be congestion on the TOR switch interconnect. Also, if congestion happens and a pause frame is sent, both switches will be paused. To mitigate both issues, you can configure 2 subnets - each network switch hosts a separate subnet. This also brings another benefit - in a converged setup, if a connection fails, it is visible in Failover Cluster Manager. m-LAG is optional if the switches are dedicated to east-west (SMB) traffic only; in that case, no SMB Multichannel traffic crosses the interconnect, as each SMB adapter is in a different subnet. If VMs or any other traffic use these switches, m-LAG is required.
TOR switches will be configured with a trunk and a native (access) VLAN for management, with one slight difference from the single-subnet design: each subnet for SMB traffic will have its own VLAN. This also helps discover disconnected physical connections (https://youtu.be/JxKMSqnGwKw?t=204).
Note: The two-subnet deployment is now becoming the standard. The same approach is used when Network ATC is deployed.
Direct connections (Switchless)
In Windows Server 2019, you can connect all nodes in a mesh. With 2 nodes, it is just one connection. With 3 nodes, it is 3 interconnects. With 5 nodes, it jumps to 10. For 2- or 3-node designs, it makes sense to use 2 connections between each pair of nodes in case one link goes down (for example, a cable failure); otherwise traffic would go over a slower connection (such as 1Gb if north-south traffic uses Gigabit network links). Dell supports up to 4 nodes in a switchless configuration.
The math is simple: with 5 nodes it is 4+3+2+1=10. Each connection requires a separate subnet.
# Calculation for number of connections
$NumberOfNodes = 5
(1..($NumberOfNodes - 1) | Measure-Object -Sum).Sum
RDMA Protocols
RDMA is not required for Azure Stack HCI, but it is highly recommended. It has lower latency because traffic uses a hardware data path (the application can send data directly to the hardware using DMA).
There are multiple flavors of RDMA. The most used in Azure Stack HCI are RoCEv2 and iWARP. InfiniBand can also be used, but only for SMB traffic (the NICs cannot be connected to a vSwitch).
iWARP
iWARP uses TCP for transport. This is a bit easier to configure, as it relies on TCP for congestion control, and configuring DCB/ETS is not mandatory. For larger deployments it is still recommended, so that traffic can be prioritized.
Some network vendors require Jumbo Frames to be configured to 9014; a host-side sketch follows.
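For illustration only (the adapter names and the exact DisplayName/DisplayValue strings vary by NIC vendor and driver):
# Illustrative sketch - property strings differ between NIC vendors
Set-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -DisplayName "Jumbo Packet" -DisplayValue "9014"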
RoCE
RoCE uses UDP for transport. It is mandatory to enable DCB (PFC/ETS) and ECN on both the physical NICs and the physical network infrastructure.
If the congestion control mechanisms are not correctly implemented, it can lead to massive retransmits, which can cause infrastructure instability and storage disconnections. It is crucial to configure this correctly.
Where DCB needs to be configured
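A minimal host-side sketch of DCB settings commonly used for RoCE (priority 3 for SMB and a 50% ETS reservation are example values, adapter names are placeholders; the switch side must be configured to match):
# Illustrative sketch - values are examples, adapter names are placeholders
Install-WindowsFeature -Name "Data-Center-Bridging"
# Do not accept DCB settings pushed from the switch
Set-NetQosDcbxSetting -Willing $false
# Tag SMB Direct (TCP port 445) traffic with priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable Priority Flow Control only for the SMB priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve bandwidth for SMB with ETS and apply QoS on the physical adapters
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "NIC1","NIC2"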
Virtual Switch and Virtual Network adapters
Converged Design
This design is the most common, as it is the simplest and requires just two ports, and RDMA can be enabled on vNICs. In the example below, one VLAN is used for the SMB vNICs. As mentioned above, you may consider using two VLANs and two subnets for the SMB vNICs to control the traffic flow, as that is becoming the standard.
The converged design also makes the best use of capacity (say you have 4x25Gbps NICs): you can then use up to 100Gbps of capacity for storage or virtual machines, while using the latest technology such as VMMQ.
Dedicated NICs for East-West traffic
Some customers prefer to dedicate physical network adapters to east-west traffic. In the example below, all physical ports on the physical switch are configured the same (for simplicity), and just two physical switches are used. You can also have dedicated switches for east-west (SMB) traffic. If DCB is configured, VLANs are mandatory for the SMB adapters. In the example below, one VLAN is used for SMB; two VLANs and two subnets can be used to better control traffic.
Dedicated NICs for East-West traffic and management
Some customers even prefer to have dedicated network cards (ports) for management. One reason can be a customer requirement to have dedicated physical switches for management.
Network adapters hardware
Network adapters that support all modern features such as VMMQ or SDN offloading are listed in the Hardware Compatibility List with the Software-Defined Data Center (SDDC) Premium Additional Qualifier. For more information about hardware certification for Azure Stack HCI, you can read this two-part blog:
part1, part2.
2.1.5 - 05. Storage Capacity Planning
Planning capacity
Capacity reserve
When a disk failure happens, it is necessary to have some capacity reserve so there is immediately capacity to rebuild into. For example, if one disk in one node disconnects, there will be reserved capacity to restore the required number of copies (2 copies in a two-way mirror, 3 copies in a three-way mirror).
It is recommended to leave the capacity of the largest disk in each node unoccupied - reserved. For the calculation you can use http://aka.ms/s2dcalc. It is not necessary to mark a disk as "reserved" or anything like that; it is simply about not consuming the capacity of one disk.
Since regular maintenance is required (security patches), a reboot might be necessary. Also, if any hardware upgrade is done (for example, increasing RAM), a node might need to be put into maintenance mode or even shut down. If VMs are required to keep running, there has to be enough capacity (RAM) to keep the VMs running on the rest of the nodes.
With more than 5 nodes, it might make sense to reserve an entire node. You will have capacity for VMs when a node is in maintenance, and you will also be able to rebuild if one node is completely lost - assuming all its disks are damaged (which is usually unlikely, as typically just one component fails and can be replaced within the service agreement).
Resiliency options
Mirror (two-way and three-way)
Two-way mirroring writes two copies of everything. Its storage efficiency is 50% - to write 1TB of data, you need at least 2TB of physical storage capacity. Likewise, you need at least two fault domains. By default, the fault domain is the Storage Scale Unit (which translates to a server node); the fault domain can also be Chassis or Rack. Therefore, if you have a two-node cluster, two-way mirroring will be used.
With a three-way mirror, the storage efficiency is 33.3% - to write 1TB of data, you need at least 3TB of physical storage capacity. Likewise, you need at least three fault domains. If you have 3 nodes, a three-way mirror will be used by default.
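For illustration, a mirrored volume could be created like this (the pool name, volume name, and size are placeholders; Storage Spaces Direct picks a two-way or three-way mirror based on the number of fault domains):
# Illustrative sketch - names and size are placeholders
New-Volume -StoragePoolFriendlyName "S2D on AzSHCI-Cluster" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 1TB -ResiliencySettingName Mirror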
Dual-parity
Dual parity implements Reed-Solomon error-correcting codes to keep two bitwise parity symbols, thereby providing the same fault tolerance as three-way mirroring (i.e. up to two failures at once), but with better storage efficiency. It most closely resembles RAID-6.
To use dual parity, you need at least four hardware fault domains – with Storage Spaces Direct, that means four servers. At that scale, the storage efficiency is 50% – to store 2 TB of data, you need 4 TB of physical storage capacity.
With an increasing number of fault domains (nodes), local reconstruction codes (LRC) can be used. LRC can decrease rebuild times, as only the local (local group) parity can be used to rebuild data (there is one local and one global parity in a dataset).
Mirror-Accelerated Parity
Spaces Direct volume can be part mirror and part parity. Writes land first in the mirrored portion and are gradually moved into the parity portion later. Effectively, this is using mirroring to accelerate erasure coding.
To mix three-way mirror and dual parity, you need at least four fault domains, meaning four servers.
The storage efficiency of mirror-accelerated parity is in between what you would get from using all mirror or all parity, and depends on the proportions you choose.
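A sketch of creating a mirror-accelerated parity volume (the pool name, tier names, and sizes are placeholders and must match the storage tiers defined in your pool):
# Illustrative sketch - tier names and sizes are placeholders
New-Volume -StoragePoolFriendlyName "S2D on AzSHCI-Cluster" -FriendlyName "MAP-Volume01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames "Performance","Capacity" -StorageTierSizes 200GB,800GB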
With an increasing number of nodes, it might be useful to place data only on selected nodes to better control which data remains accessible if certain nodes fail. With scoped volumes, the system can tolerate more than 2 node failures while keeping volumes online.
Cache drives
Faster media can be used as cache. If HDDs are used, cache devices are mandatory. Cache drives do not contribute to capacity.
In hyperconverged systems, the CPU handles both VMs and storage. A rule of thumb is that each logical processor can handle ~60 MiB/s of IO. Let's calculate an example: a four-node cluster, each node with two twelve-core CPUs. If we consider 4k IOs, each LP can handle ~15k IOPS. With 4 nodes of 24 LPs each, the result is ~1.5M IOPS - all assuming that the CPU is used for IO operations only.
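The rule of thumb above can be sketched as a quick back-of-the-envelope calculation:
# Back-of-the-envelope sketch of the CPU rule of thumb above
$Nodes = 4
$LogicalProcessorsPerNode = 24      # two twelve-core CPUs per node
$IopsPerLogicalProcessor = 15000    # ~60 MiB/s per LP at a 4k IO size
$Nodes * $LogicalProcessorsPerNode * $IopsPerLogicalProcessor   # roughly 1.4-1.5M IOPS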
Storage devices
In general, there are two kinds of devices - spinning media and solid state disks. We all know this story, as it has been some time since we upgraded our PCs with SSDs and saw the significant latency drop. There are two factors though - the type of media (HDD or SSD) and the type of bus (SATA, SAS, NVMe, or Storage Class Memory - SCM).
The HDD media type always uses SATA or SAS, and that bus was more than enough for its purpose. With the introduction of the SSD media type, SATA/SAS started to show its limitations. Namely, with SATA/SAS you will utilize 100% of your CPU and still not be able to reach more than ~300k IOPS, because SATA/SAS was designed for spinning media and one controller connects multiple devices to one PCIe connection. NVMe was designed from scratch for low latency and parallelism and has a dedicated connection to PCIe. Therefore, a NAND NVMe drive outperforms a NAND SATA/SAS SSD drive.
Another significant leap was the introduction of Intel Optane SSDs, which brought even lower latencies than NAND SSDs. Since Optane media is bit-addressable, there is no garbage to collect (on a NAND SSD you can erase only whole blocks, with a negative performance impact).
An important consideration when selecting storage devices is that if you choose an SSD+HDD combination, all the heavy lifting ends up in one SATA/SAS controller connected to one PCIe slot. Therefore, it is recommended to consider using NVMe instead, as each NVMe device has its own PCIe lanes.
Network cards
There are several considerations when talking about network cards.
Network Interface Speed
Network cards come in speeds ranging from 1Gbps to 200Gbps. While hyperconverged infrastructure will work with 1Gbps, the performance will be limited. The requirement is at least one 10Gbps port per server; however, it is recommended to have at least 2x10Gbps with RDMA enabled.
Mediatype: Recommended NICs
SSD as cache or SSD all-flash: 2x10Gbps or 2x25Gbps
NVMe as cache: 2-4x25Gbps or 2x100Gbps
NVMe all-flash: 2-4x25Gbps or 2x100Gbps
Optane as cache: 2-4x100Gbps or 2x200Gbps
Use of RDMA
When RDMA is enabled, traffic bypasses the host networking stack and the NIC transfers data directly using DMA. This significantly reduces CPU overhead. While RDMA is not mandatory, it is highly recommended for Azure Stack HCI, as it leaves more CPU for virtual machines and storage.
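To check whether RDMA is enabled and visible to SMB, a quick sketch:
# Illustrative check of RDMA state on the host
Get-NetAdapterRdma | Select-Object Name, Enabled
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable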
RDMA protocol
There are two flavors of RDMA: iWARP (TCP/IP) and RoCE (UDP). The main difference is the need for a lossless infrastructure with RoCE - when a switch is loaded and starts dropping packets, it cannot prioritize or notify the infrastructure to stop sending if DCB/PFC/ETS is not configured. When a packet is dropped on UDP, a large retransmit needs to happen, which causes an even higher load on the switches. Retransmits also happen on TCP/IP, but they are significantly smaller. It is still recommended to configure PFC/ETS on both, if possible, so the switch can notify the infrastructure to stop sending packets.
Network infrastructure
A reliable, low-latency infrastructure is a must for reliable operation of converged and hyperconverged infrastructure. As covered above, DCB (PFC and ETS) is recommended for iWARP and required for RoCE. There is also an alternative - starting with Windows Server 2019, direct (switchless) connection is supported. As you can see below, it does not make sense to have more than 5 nodes in such a cluster, given the increasing number of interconnects.
Number of nodes: Number of direct connections
2: 1
3: 3
4: 6
5: 10
Note: Dell supports up to 4 nodes in switchless configuration
Hardware certification programme
It is very important to follow the validated hardware path. This way you can avoid ghost hunting when a single component misbehaves due to firmware, or hardware that cannot handle the load under high pressure. There is a very good blog summarizing the importance of validated hardware: part1, part2. Validated solutions are available in the Azure Stack HCI Catalog. For Azure Stack HCI you can also consider an Integrated System, which includes the Azure Stack HCI operating system pre-installed as well as partner extensions for driver and firmware updates.
Note: Dell sells only Integrated Systems, as Microsoft highly recommends those over validated solutions.
2.2 - Storage Stack
2.2.1 - Storage Stack Overview
Understanding the storage stack is crucial for understanding which technologies are involved and where (where Storage Replica sits, where the ReFS Multi-Resilient Volume is, and so on). Understanding how the layers are stacked will also help when troubleshooting IO flow - for example, when reviewing performance counters or troubleshooting core functionality.
Traditional stack compared to the Storage Spaces stack (note that MPIO is missing; for Storage Spaces Direct it is not needed, as there is only one path to the physical device, so it was omitted)
You may notice 4 "new" layers, but in fact it is just the Spaces layer (Spaceport) and the Storage Bus Layer.
To better understand what is in the stack, you can also explore some parts with PowerShell.
Anyway, let's explore the layers a bit. The following information is based on a storage stack description that someone once created and published on the internet; the only version found was from the web archive and can be accessed here.
Layers below S2D Stack
Port & Miniport driver
storport.sys & stornvme.sys
Port drivers implement the processing of an I/O request specific to a type of I/O port, such as SATA, and are implemented as kernel-mode libraries of functions rather than actual device drivers. The port driver is written by Microsoft (storport.sys). If a third party wants to write its own device driver (for example, for an HBA), it will use a miniport driver (except when the device is NVMe, in which case the miniport driver is Microsoft's stornvme.sys).
Miniport drivers usually use Storport performance enhancements, such as support for the parallel execution of IO.
A storage class driver (typically disk.sys) uses the well-established SCSI class/port interface to control a mass storage device of its type on any bus for which the system supplies a storage port driver (currently SCSI, IDE, USB and IEEE 1394). The particular bus to which a storage device is connected is transparent to the storage class driver.
The storage class driver is responsible for claiming devices, interpreting system I/O requests, and much more.
In the Storage Spaces stack (Virtual Disk), disk.sys is responsible for claiming the virtual disk exposed by Spaceport (Storage Spaces).
Partition Manager
partmgr.sys
Partitions are handled by partmgr.sys. The partition layout is usually GPT or MBR (preferably GPT, as MBR has many limitations, such as the 2TB size limit).
As you can see in the stack, there are two partition managers. One partition layout is on the physical disk, and it is then consumed by Storage Spaces (Spaceport).
In the picture below you can see an individual physical disk from Spaces exposed with its partitions, showing the metadata partition and the partition containing the pool data (normally not visible, as it is hidden by partmgr.sys when it detects Spaces).
S2D Stack
Storage Bus Layer
clusport.sys and clusblft.sys
These two drivers (client/server) expose all physical disks to each cluster node, so it looks like all physical disks from every cluster node are connected to each server. SMB is used for the interconnect, so high-speed RDMA can be used (recommended).
It also contains SBL cache.
Spaceport
spaceport.sys
Claims disks and adds them to the storage spaces pool. It creates the partitions where internal data structures and metadata are kept (see the screenshot in the Partition Manager section).
Defines resiliency when a volume (virtual disk) is created (creates and distributes extents across physical disks).
Virtual Disk
disk.sys is now used by Storage Spaces and exposes the virtual disk that was provisioned using spaceport.sys.
Layers above S2D Stack
Volume Manager
dmio.sys, volmgr.sys
Volumes are created on top of the partition; on volumes you can then create filesystems and expose them to the components higher in the stack.
Volume Snapshot
volsnap.sys
Volsnap is the component that provides the system provider for the Volume Shadow Copy Service (VSS). This service is controlled with vssadmin.exe.
BitLocker
fvevol.sys
BitLocker is the well-known disk encryption software that has been on the market since Windows Vista. In PowerShell, you can check volume status with the Get-BitLockerVolume command.
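For example, a quick status check might look like this:
# Illustrative check of BitLocker status for all volumes
Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, ProtectionStatus, EncryptionPercentage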
Filter Drivers
An interesting fact about filter drivers is that all filesystem drivers are actually filter drivers - special ones, file system drivers - such as refs.sys, ntfs.sys, and exfat.sys.
You can learn more about a filesystem using fsutil.
There are also many first-party and third-party filter drivers. You can list those with the fltmc command.
As you can see in the example above, there are many filters, such as Cluster Shared Volume (CsvNSFlt, CsvFlt), Deduplication (Dedup), Shared VHDX (svhdxflt), Storage QoS (storqosflt), and many more. Each filter driver has a defined altitude, and third parties can reserve theirs. Both tools are sketched below.
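For illustration, both tools can be run from an elevated prompt (the volume path is a placeholder):
# Illustrative examples - volume path is a placeholder
fsutil fsinfo volumeinfo C:
fltmc filters
fltmc instances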
While SATA still performs well for most customers (see the performance results), NVMe offers the benefits of higher capacity and a more efficient protocol (NVMe vs. AHCI), which was developed specifically for SSDs (unlike AHCI, which was developed for spinning media). SATA/SAS, however, does not scale well with larger disks.
There is also another aspect of the performance limitation of SATA/SAS devices - the controller. All SATA/SAS devices are connected to one SAS controller (non-RAID) that has limited speed (only one PCIe connection).
Drive Connector is universal (U2, also known as SFF-8639)
NVMe backplane connection - Example AX7525 - 16 PCIe Gen4 lanes in each connection (8 are used), 12 connections in backplane, in this case no PCIe switches.
SSDs were originally created to replace conventional rotating media. As such they were designed to connect to the same bus types as HDDs, both SATA and SAS (Serial ATA and Serial Attached SCSI).
However, this imposed speed limitations on the SSDs. Now a new type of SSD exists that attaches to PCIe, known as NVMe SSDs or simply NVMe.
When combining multiple media types, the faster media is used for caching. While it is recommended to use 10% of the capacity for cache, the important thing is not to overrun the cache with the production workload, as that will dramatically reduce performance. Therefore, all production workload should fit into the Storage Bus Layer cache (the cache devices). The sweet spot (price vs. performance) is the combination of fast NVMe (mixed use or write intensive) with HDDs. For performance-intensive workloads it is recommended to use all-flash solutions, as caching introduces ~20% overhead plus less predictable behavior (data may already have been destaged), which is why All-Flash is recommended for SQL workloads.
Dell servers use BOSS (Boot Optimized Storage Solution) cards. In essence, it is a card with 2x M.2 2280 devices connected to PCIe with a configurable non-RAID/RAID 1 setup.
Consumer-Grade SSDs
You should avoid any consumer-grade SSDs, as they might contain NAND with higher latency (so there can be a performance drop after the FTL buffer spills) and because consumer-grade SSDs are not power-loss protected (PLP). You can learn more about why consumer-grade SSDs are not a good idea in a blog post. Consumer-grade SSDs also have a lower DWPD (Drive Writes Per Day) rating. You can learn about DWPD in this blog post.
From the screenshot you can see that the AX640 BOSS card reports as a SATA device with an Unspecified media type, while the SAS disks are reported as SSDs with the SAS bus type. Let's dive into BusType/MediaType a little bit (see the table below).
Storage Spaces requires BusType SATA/SAS/NVMe or SCM. BusType RAID is unsupported.
You can also see the logical sector size and physical sector size. This refers to the drive type (4K native vs. 512e vs. 512).
Once a disk is added to Storage Spaces, S.M.A.R.T. attributes can be filtered out. For reading disk status (such as wear level, temperatures, and so on), Get-StorageReliabilityCounter can be used, as sketched below.
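A sketch of reading these properties with PowerShell:
# Illustrative sketch - inspect bus/media type, sector sizes, and reliability counters
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, LogicalSectorSize, PhysicalSectorSize
Get-PhysicalDisk | Get-StorageReliabilityCounter | Select-Object DeviceId, Temperature, Wear, ReadErrorsTotal, PowerOnHours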
2.3.1.1 - Azure Stack HCI Support Matrix for 14G-15G (2409)
Notes and warnings
CAUTION
The upgrade of Azure Stack HCI, version 22H2 to Azure Stack HCI, version 23H2 is currently not supported by Dell. Dell recommends that customers wait for Dell to complete the upgrade validation before attempting the upgrade. Customers who choose to proceed with the upgrade need to contact Microsoft for assistance with resolving problems that may occur during the upgrade process.
Dell also recommends that any customer choosing to upgrade their Azure Stack HCI clusters running HCI OS 22H2 to HCI OS 23H2 update the cluster nodes to the driver and firmware versions listed in the 2409 support matrix prior to upgrading.
For new deployments, we recommend that you use Azure Stack HCI, version 23H2 which is now generally available. For more information on Azure Stack HCI, version 23H2, see Use Azure Update Manager to update your Azure Stack HCI, version 23H2.
Supported Platforms
Model: Supported Operating Systems
AX-4510c: Windows Server 2022 Datacenter, Azure Stack HCI-22H2, Azure Stack HCI-23H2
AX-4520c: Windows Server 2022 Datacenter, Azure Stack HCI-22H2, Azure Stack HCI-23H2
AX-640: Windows Server 2019 Datacenter, Azure Stack HCI-22H2, Windows Server 2022 Datacenter
AX-740xd: Windows Server 2019 Datacenter, Azure Stack HCI-22H2, Windows Server 2022 Datacenter
AX-6515: Windows Server 2019 Datacenter, Azure Stack HCI-22H2, Windows Server 2022 Datacenter, Azure Stack HCI-23H2
AX-7525: Windows Server 2019 Datacenter, Azure Stack HCI-22H2, Windows Server 2022 Datacenter, Azure Stack HCI-23H2
AX-650: Windows Server 2022 Datacenter, Azure Stack HCI-22H2, Azure Stack HCI-23H2
AX-750: Windows Server 2022 Datacenter, Azure Stack HCI-22H2, Azure Stack HCI-23H2
2.3.2.1 - Azure Stack HCI Support Matrix for 14G-15G (2406)
Notes and warnings
CAUTION
The upgrade of Azure Stack HCI, version 22H2 to Azure Stack HCI, version 23H2 is currently not supported by Dell. Dell recommends that customers wait for Dell to complete the upgrade validation before attempting the upgrade. Customers who choose to proceed with the upgrade need to contact Microsoft for assistance with resolving problems that may occur during the upgrade process.
Dell also recommends that any customer choosing to upgrade their Azure Stack HCI clusters running HCI OS 22H2 to HCI OS 23H2 update the cluster nodes to the driver and firmware versions listed in the 2409 support matrix prior to upgrading.
For new deployments, we recommend that you use Azure Stack HCI, version 23H2 which is now generally available. For more information on Azure Stack HCI, version 23H2, see Use Azure Update Manager to update your Azure Stack HCI, version 23H2.
Supported Platforms
Model: Supported Operating Systems
AX-640: Windows Server 2019 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Windows Server 2022 Datacenter
AX-740xd: Windows Server 2019 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Windows Server 2022 Datacenter
AX-6515: Windows Server 2019 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Windows Server 2022 Datacenter, Azure Stack HCI-23H2
AX-7525: Windows Server 2019 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Windows Server 2022 Datacenter, Azure Stack HCI-23H2
AX-650: Windows Server 2022 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Azure Stack HCI-23H2
AX-750: Windows Server 2022 Datacenter, Azure Stack HCI-21H2, Azure Stack HCI-22H2, Azure Stack HCI-23H2
2.3.3.1 - Azure Stack HCI Firmware and Driver Matrix for Legacy Windows Server Operating Systems (Windows Server 2016)
Introduction
This matrix is for Windows Server operating systems that have exited Microsoft mainstream support. This Windows Server operating system is no longer validated by Dell for use with hyperconverged cluster deployments. The table in the link below is a snapshot of the last firmware and driver versions that were validated by Dell engineering for use with this legacy Windows Server version.
Customers still running this Windows Server version for their hyperconverged cluster deployments are encouraged to perform an in-place upgrade to the Windows Server 2022 operating system per the instructions at the following link.
Azure Stack Docs is an open-source project and we strive to build a welcoming and open community for anyone who wants to use the project or contribute to it.
Contributing to Azure Stack Docs
Become one of the contributors to this project!
You can contribute to this project in several ways. Here are some examples:
Test your changes locally and make sure they do not break anything.
Code reviews
All submissions, including submissions by project members, require review.
We use GitHub pull requests for this purpose.
Branching strategy
The Azure Stack documentation portal follows a release branch strategy where a branch is created for each release and all documentation changes made for a release are done on that branch. The release branch is then merged into the main branch at the time of the release. In some situations it may be sufficient to merge a non-release branch to main if it fixes some issue in the documentation for the current released version.
By default, local changes are reflected at http://localhost:1313/. Hugo watches for changes to the content and automatically refreshes the site.
Note: To bind the server to a different address, use hugo server --bind 0.0.0.0; the default is 127.0.0.1.
After testing the changes locally, push the edited pages to GitHub and raise a pull request.
Hardcoded relative links like [troubleshooting observability](../../observability/troubleshooting.md) will behave unexpectedly compared to how they would work on our local file system.
To avoid broken links in the portal, use regular relative URLs in links that will be left unchanged by Hugo.
Style guide
Use sentence case wherever applicable.
Use numbered lists for items in sequential order and bullets for other lists.