NX-8170-G9 System
Specifications
Platform ANY
NX-8170-G9
May 13, 2025
Contents
1. System Specifications
   Node Naming (NX-8170-G9)
   NX-8170-G9 System Specifications
2. Component Specifications
   Controls and LEDs for Single-node Platforms
   Network Card LED Description
   Power Supply Unit Redundancy and Node Configuration
3. Memory Configurations
   Supported Memory Configurations
4. Nutanix Hardware Naming Convention
Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
1. SYSTEM SPECIFICATIONS
Node Naming (NX-8170-G9)
Nutanix assigns a name to each node in a block, which varies by product type.
The NX-8170-G9 block contains a single node named Node A.
Drive Configurations
The NX-8170-G9 is supported as a hyper-converged infrastructure (HCI) node, a Nutanix Unified Storage (NUS) node, or a compute-only node with no storage drives. When used as an HCI node, the NX-8170-G9 supports All-SSD and All-NVMe drive configurations. When used as a dedicated NUS node, the NX-8170-G9 supports All-SSD, All-NVMe, and NVMe+NVMe Storage Tier (NST) drive configurations. The NX-8170-G9 supports partial drive population in All-SSD and All-NVMe configurations only, and only with even numbers of drives; the sketch after the following list restates this rule.
Supported drive configurations for All-SSD and All-NVMe:
• Two SSD or NVMe drives in slots 1 and 2, all other slots empty
• Four SSD or NVMe drives in slots 1 through 4, all other slots empty
• Six SSD or NVMe drives in slots 1 through 6, all other slots empty
• Eight SSD or NVMe drives in slots 1 through 8, all other slots empty
• Ten SSD or NVMe drives in slots 1 through 10, all other slots empty
• Twelve SSD or NVMe drives in slots 1 through 12
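The configuration rule above reduces to a simple check: the drive count must be an even number from 2 through 12, and drives always populate slots starting at slot 1. The following Python sketch restates that rule for illustration only; the function name and structure are assumptions and are not part of any Nutanix tool or API.

```python
# Illustrative only: slot layout for NX-8170-G9 All-SSD / All-NVMe
# configurations, derived from the list above.
SUPPORTED_DRIVE_COUNTS = (2, 4, 6, 8, 10, 12)  # even counts only, 12 slots max

def populated_slots(drive_count: int) -> list[int]:
    """Return the slot numbers to populate for a given drive count.

    Drives fill slots 1..N contiguously; all remaining slots stay empty.
    """
    if drive_count not in SUPPORTED_DRIVE_COUNTS:
        raise ValueError(
            f"{drive_count} drives is not a supported All-SSD/All-NVMe "
            f"configuration; use one of {SUPPORTED_DRIVE_COUNTS}"
        )
    return list(range(1, drive_count + 1))

# Example: a six-drive configuration occupies slots 1 through 6.
print(populated_slots(6))  # [1, 2, 3, 4, 5, 6]
```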
Figure 1: NX-8170-G9 Front Panel
Figure 2: NX-8170-G9 Control Panel
Figure 3: NX-8170-G9 Back Panel
NX-8170-G9 System Specifications
Table 1: System Characteristics
Boot Device
• 2 x 512GB M.2 Boot Device
Chassis
• 1U1N SFF Chassis
CPU
Dual Intel Xeon® 5th Gen. (Emerald Rapids)
• 2 x Intel Xeon® Silver 4510 [12 cores / 2.40 GHz / 150 W]
• 2 x Intel Xeon® Silver 4514Y [16 cores / 2.00 GHz / 150 W]
• 2 x Intel Xeon® Silver 4516Y+ [24 cores / 2.20 GHz / 185 W]
• 2 x Intel Xeon® Gold 5515+ [8 cores / 3.20 GHz / 165 W]
• 2 x Intel Xeon® Gold 6434 [8 cores / 3.70 GHz / 195 W]
• 2 x Intel Xeon® Gold 6526Y [16 cores / 2.80 GHz / 195 W]
• 2 x Intel Xeon® Gold 6542Y [24 cores / 2.90 GHz / 250 W]
• 2 x Intel Xeon® Gold 5520+ [28 cores / 2.20 GHz / 205 W]
• 2 x Intel Xeon® Gold 6538Y+ [32 cores / 2.20 GHz / 225 W]
• 2 x Intel Xeon® Gold 6548Y+ [32 cores / 2.50 GHz / 250 W]
Dual Intel Xeon® 4th Gen. (Sapphire Rapids)
• 2 x Intel Xeon® Silver 4410T [10 cores / 2.70 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Silver 4416+ [20 cores / 2.00 GHz / 165 W]
• 2 x Intel Xeon® Gold 5415+ [8 cores / 2.90 GHz / 150 W]
• 2 x Intel Xeon® Gold 5416S [16 cores / 2.00 GHz / 150 W]
• 2 x Intel Xeon® Gold 5418Y [24 cores / 2.00 GHz / 185 W]
• 2 x Intel Xeon® Gold 5420+ [28 cores / 2.00 GHz / 205 W]
• 2 x Intel Xeon® Gold 6416H [18 cores / 2.20 GHz / 165 W]
• 2 x Intel Xeon® Gold 6426Y [16 cores / 2.50 GHz / 185 W]
• 2 x Intel Xeon® Gold 6442Y [24 cores / 2.60 GHz / 225 W]
• 2 x Intel Xeon® Gold 6448H [32 cores / 2.40 GHz / 250 W]
• 2 x Intel Xeon® Gold 6448Y [32 cores / 2.10 GHz / 225 W]
• 2 x Intel Xeon® Gold 6534 [8 cores / 3.90 GHz / 195 W]
Memory
Note:
• The 96 GB memory configurations are only available with Emerald Rapids processors.
• The minimum memory configuration required for ENERGY STAR 4.0 certification is 512 GB. If memory is less than 512 GB, the platform is not ENERGY STAR certified, even if the CPU is certified.
• When configuring 128 GB DIMMs, you cannot configure the 100GbE NIC option.
32GB DIMM
• 4 x 32 GB = 128 GB
• 8 x 32 GB = 256 GB
• 12 x 32 GB = 384 GB
• 16 x 32 GB = 512 GB
• 24 x 32 GB = 768 GB
• 32 x 32 GB = 1024 GB
64GB DIMM
• 4 x 64 GB = 256 GB
• 8 x 64 GB = 512 GB
• 12 x 64 GB = 768 GB
• 16 x 64 GB = 1024 GB
• 24 x 64 GB = 1536 GB
• 32 x 64 GB = 2048 GB
96GB DIMM
• 16 x 96 GB = 1536 GB
• 24 x 96 GB = 2304 GB
• 32 x 96 GB = 3072 GB
128GB DIMM
• 4 x 128 GB = 512 GB
• 8 x 128 GB = 1024 GB
• 12 x 128 GB = 1536 GB
• 16 x 128 GB = 2048 GB
• 24 x 128 GB = 3072 GB
• 32 x 128 GB = 4096 GB
Network
Note:
• A third add-on NIC is only available with the All-NVMe (all-flash) configuration. The quad-port 25GbE and dual-port 100GbE NIC options are limited to a maximum quantity of two.
• When configuring 128 GB DIMMs, you cannot configure the 100GbE NIC option.
Ports on Serverboard
• 1x 1GbE Dedicated IPMI
AIOM
• 1 x 2P 10GBase-T (Port 1 is shared IPMI)
• 1 x 2P 10GBase-T + 2P SFP+ (Port 1 is shared IPMI)
Add-on NICs in PCIe slots (maximum quantity three)
• 0, 1, 2 or 3 x 10GbE 4P NIC
• 0, 1, 2 or 3 x 10GBaseT 2P NIC
• 0, 1, 2 or 3 x 10GBaseT 4P NIC
• 0, 1, 2 or 3 x 25GbE 2P NIC
Add-on NICs in PCIe slots (maximum quantity two)
• 0, 1 or 2 x 25GbE 4P NIC
• 0, 1 or 2 x 100GbE 2P NIC
Network Cables
• 1M-QSFP28
• 1M-QSFP+
• 1M-SFP28
• 3M-QSFP28
• 3M-SFP28
• 3M-SFP+
• 5M-QSFP28
• 5M-QSFP+
• 5M-SFP28
• 5M-SFP+
Power Cable
• C13/14 4ft Power Cable
Power Supply
• 2 x PWS, 2000W
Server
• 1 x NX-8170-G9 Server
Storage
All NVMe: 2, 4, 6, 8, 10, or 12 x NVMe
• 1.92TB
• 3.84TB
• 7.68TB
• 15.36TB
All SSD: 2, 4, 6, 8, 10, or 12 x SSD
• 1.92TB
• 3.84TB
• 7.68TB
All SSD SED: 2, 4, 6, 8, 10, or 12 x SSD
• 3.84TB
• 7.68TB
NVMe+NST: 4 x NVMe
• 15.36TB
8 x NVMe Storage Tier
• 30TB
Storage Controller
• 1 x Broadcom 3816 Host Bus Adapter (HBA), required for SATA/SAS-based drive support
TPM
• 0 or 1 x Unprovisioned Trusted Platform Module
Transceiver
• SR SFP+ Transceiver
VGA: 1 x VGA connector per node (15-pin female)
Chassis fans: 8 x 40 mm counter-rotating fans
Table 2: Block, power and electrical
Block
• Width : 438.4 mm
• Height : 43.6 mm
• Depth : 745.7 mm
Maximum configuration (without rails and accessory kits)
• Weight : 19 kg
Package
Maximum weight with rails and accessory kits
• Weight : 31.7 kg
Shock
10 ms, half-sine, one shock on each side
• Operating : 5 G
10 ms, square wave, one shock on each side
• Non-Operating : 20 G
Thermal Dissipation
• Typical : 3847 BTU/hr
• Maximum : 5129 BTU/hr
Vibration (Random)
5 to 200 Hz, approx. 30 min./axis
• Non-Operating : 0.98 Grms
5 to 500 Hz, approx. 15 min./axis
• Operating : 0.21 Grms
Power consumption (calculated)
Note: For the power consumption calculations, maximum NIC and disk configurations are considered.
Max Config
• Maximum: 1367 VA
• Typical: 957 VA
Operating environment
• Operating temperature : 10-30°C
• Non-operating temperature : -40 to 70°C
• Operating relative humidity : 20-85%
• Non-operating relative humidity : 5-95%
Certifications
• BIS
• BSMI
• CE
• CSA
• CSAus
• EAC
• Energy Star
• FCC
• ICES
• KCC
• RCM
• Reach
• RoHS
• S-MARK
• SABS
• SII
• UKCA
• UKRSEPRO
• UL
• VCCI-A
• WEEE
• cUL
2. COMPONENT SPECIFICATIONS
Controls and LEDs for Single-node Platforms
Figure 4: Controls and LEDs for NX-8170-G9
Table 3: Front Panel Controls and Indicators
Power button: Applies or removes primary power from the power supply to the system (system standby power is maintained).
Power LED: Indicates that power is being supplied to the system power supply units. This LED is illuminated when the system is operating normally.
Drive LED: Not used.
Port 1 LED (NIC): Indicates network activity on AIOM 10GBase-T port 1.
Multiple Function i LED: Solid blue = UID activated locally; blinking blue = UID activated remotely (using IPMI); solid red = CPU overheat detected; flashing red at 1 Hz = fan failure.
Unit Identifier (UID) button: Illuminates LEDs on the front and rear of the chassis for system identification. The LED remains on until the button is pushed a second time.
Table 4: Drive LEDs
Top LED (Activity): Blue or green; blinking = I/O activity, off = idle.
Bottom LED (Status): Solid red = failed drive; on for five seconds after boot = power on.
Figure 5: Back Panel LEDs and Indicators
Table 5: Back Panel LEDs and Indicators
IPMI Link LED (left): Solid green = 100 Mbps; solid amber = 1 Gbps.
IPMI Activity LED (right): Blinking amber = active.
AIOM Link LED (left): Off = no link; green = linked at 10 Gb/s; amber = linked at 1 Gb/s.
AIOM Activity LED (right): Off = no activity; blinking green = link up (traffic flowing).
Locator LED (UID): Blinking blue = node identified.
Table 6: Power Supply LED Indicators
No AC power to all power supplies: Off
Power supply critical events that cause a shutdown (failure, over current protection, over voltage protection, fan fail, over temperature protection, under voltage protection): Steady amber
Power supply warning events where the power supply continues to operate (high temperature, over voltage, under voltage, and other conditions): Blinking amber (1 Hz)
When AC is present only, with 12VSB on (PS off) or the PS in a sleep state: Blinking green (1 Hz)
Output on and OK: Steady green
AC cord unplugged: Steady amber
Power supply firmware updating mode: Blinking green (2 Hz)
For LED states for add-on NICs, see Network Card LED Description.
Network Card LED Description
Different NIC manufacturers use different LED colors and blink states. Not all NICs are supported for every
Nutanix platform. See the system specifications for your platform to verify which NICs are supported.
Table 7: SuperMicro NICs
Dual/Quad Port 25GbE: Link (LNK) LED: green = 25 GbE, amber = less than 25 GbE, off = no link. Activity (ACT) LED: blinking = activity, off = no activity.
Dual Port 100GbE: Link (LNK) LED: green = 100 GbE, amber = less than 100 GbE. Activity (ACT) LED: blinking = activity, off = no activity.
Table 8: Intel NICs
Quad port 10GBase-T: Link (LNK) LED: green = 10 Gbps, yellow = 5/2.5/1 Gbps, off = 100 Mbps or no link. Activity (ACT) LED: blinking green = transmitting or receiving data.
Quad port 10G SFP+: Link (LNK) LED: green = 10 Gbps, yellow = 1 Gbps. Activity (ACT) LED: blinking = activity, off = no activity.
Dual Port 25G: Link (LNK) LED: green = 10 Gbps, amber = 10 Gbps. Activity (ACT) LED: green = SFP LAN port active.
Table 9: Broadcom NICs
Dual Port 25G: Link (LNK) LED: green = linked at 25 Gb/s, yellow = linked at 10 Gb/s or 1 Gb/s, off = no link. Activity (ACT) LED: blinking green = traffic flowing, off = no activity.
Dual Port 10G: Link (LNK) LED: green = linked at 10 Gb/s, amber = linked at 1 Gb/s, off = no link. Activity (ACT) LED: blinking green = link up (traffic flowing), off = no activity.
Table 10: Mellanox NICs
Dual port CX-6 25G and Dual port CX-6 100G:
• Bi-color LED (yellow/green) blinking yellow at 1 Hz, single-color LED (green) off: beacon command for locating the adapter card.
• Bi-color LED blinking yellow at 4 Hz, single-color LED on: error with the link.
• Bi-color LED blinking green, single-color LED blinking: physical activity.
• Bi-color LED solid green, single-color LED on: link up.
Power Supply Unit Redundancy and Node Configuration
Note: Carefully plan your AC power source needs, especially in cases where the cluster consists of mixed
models. Nutanix recommends that you use a 208 V - 240 V AC power source to ensure Power Supply Unit
(PSU) redundancy.
Table 11: PSU Redundancy and Node Configuration
Nutanix model | Number of nodes | Redundancy at 110 V | Redundancy at 208-240 V
NX-1065-G9 | 1, 2, or 3 | No | Yes
NX-3035-G9 | 1 or 2 | No | Yes
NX-3060-G9 | 1, 2, 3, or 4 | No | Yes
NX-1150S-G9 | 1 | Yes | Yes
NX-1175S-G9 | 1 | Yes | Yes
NX-3155-G9 | 1 | No | Yes
NX-8150-G9 | 1 | No | Yes
NX-8155-G9 | 1 | No | Yes
NX-8155A-G9 | 1 | No | Yes
NX-8170-G9 | 1 | No | Yes
NX-9151-G9 | 1 | Not supported | Yes (2+1 PSU redundancy; two PSUs must remain functional at all times)
Caution:
For all G9 platforms except NX-1175S-G9 and NX-1150S-G9: When the input power source is 110 V, a single PSU failure will cause all nodes in a block to power off.
For NX-1175S-G9 and NX-1150S-G9: The block can tolerate a single PSU failure when connected to a 110 V input power source.
3. MEMORY CONFIGURATIONS
Supported Memory Configurations
DIMM installation information for all Nutanix G9 Hyper platforms.
DIMM Restrictions
DIMM capacity
Each G9 node must contain only DIMMs of the same capacity. For example, you cannot mix 32 GB
DIMMs and 64 GB DIMMs in the same node.
DIMM speed
G9 platforms that use Intel Sapphire Rapids processors support 4800 MT/s DIMMs. The speed of
the CPU / DIMM interface is 4000 MT/s or 4800 MT/s based on the CPU SKU used.
G9 platforms that use Intel Emerald Rapids processors ship with 5600 MT/s DIMMs. The speed of
the CPU / DIMM interface depends on the CPU class and on whether you have installed one DIMM
per memory channel (1DPC) or two DIMMs per memory channel (2DPC).
• Platinum-8xxx: max 5600 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-6xxx: max 5200 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-5xxx: max 4800 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Silver-4xxx: max 4400 MT/s at both 1DPC and 2DPC.
• Bronze-3xxx (single socket boards only): max 4400 MT/s at both 1DPC and 2DPC.
128 GB 5600 MT/s DIMMs require BIOS Ex30.001 or later. 128 GB 5600 MT/s DIMMs cannot be mixed in the same node with 4800 MT/s DIMMs.
If you install a 5600 MT/s DIMM in a G9 platform that uses Intel Sapphire Rapids processors, it runs at a maximum of 4800 MT/s.
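The Emerald Rapids speed rules above amount to a lookup keyed by CPU class and DIMMs per channel, with the result capped at the rated speed of the installed DIMMs. The Python sketch below only restates the values listed above; the table and function names are hypothetical and not part of any Nutanix tooling.

```python
# Hypothetical helper that restates the Emerald Rapids speed list above.
# Values are in MT/s; keys are (CPU class, DIMMs per memory channel).
EMR_MAX_SPEED = {
    ("Platinum-8xxx", 1): 5600, ("Platinum-8xxx", 2): 4400,
    ("Gold-6xxx", 1): 5200,     ("Gold-6xxx", 2): 4400,
    ("Gold-5xxx", 1): 4800,     ("Gold-5xxx", 2): 4400,
    ("Silver-4xxx", 1): 4400,   ("Silver-4xxx", 2): 4400,
    ("Bronze-3xxx", 1): 4400,   ("Bronze-3xxx", 2): 4400,
}

def effective_dimm_speed(cpu_class: str, dimms_per_channel: int,
                         dimm_rated_speed: int) -> int:
    """Memory runs at the lower of the CPU/DPC limit and the DIMM rating."""
    limit = EMR_MAX_SPEED[(cpu_class, dimms_per_channel)]
    return min(limit, dimm_rated_speed)

# Example: 5600 MT/s DIMMs on a Gold-6xxx CPU at 2DPC run at 4400 MT/s.
print(effective_dimm_speed("Gold-6xxx", 2, 5600))  # 4400
```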
DIMM manufacturer
DIMMs from different manufacturers cannot be mixed in a node; all DIMMs in a node must come from the same manufacturer.
Memory Installation Order for Single-Node Hyper G9 Platforms
A memory channel is a group of DIMM slots.
For G9 single-node hyper platforms, each CPU is associated with eight memory channels. Each memory
channel contains two DIMM slots, one blue slot and one black slot, for a total of 32 DIMM slots.
Figure 6: DIMM Slots for a Single-Node Hyper G9 Serverboard
Table 12: DIMM Installation Order for Single-Node Hyper G9 Platforms
Each configuration below supports 32 GB, 64 GB, and 128 GB DIMM capacities; a lookup sketch follows the list.
• 4 DIMMs: CPU1: A1, G1 (blue slots); CPU2: A1, G1 (blue slots)
• 8 DIMMs: CPU1: A1, C1, E1, G1 (blue slots); CPU2: A1, C1, E1, G1 (blue slots)
• 12 DIMMs: CPU1: A1, C1, D1, E1, F1, G1 (blue slots); CPU2: A1, C1, D1, E1, F1, G1 (blue slots)
• 16 DIMMs: CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots)
• 24 DIMMs: CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots) plus A2, C2, E2, G2 (black slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots) plus A2, C2, E2, G2 (black slots)
• 32 DIMMs: Fill all slots.
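The population order above can be expressed as a lookup from total DIMM count to the slots filled on each CPU (both CPUs are always populated identically). The sketch below mirrors Table 12 for illustration; the helper name is an assumption, not a Nutanix utility.

```python
# Slot population per CPU for each supported DIMM count (from Table 12).
BLUE = ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"]    # blue slots
BLACK = ["A2", "B2", "C2", "D2", "E2", "F2", "G2", "H2"]   # black slots

SLOTS_PER_CPU = {
    4:  ["A1", "G1"],
    8:  ["A1", "C1", "E1", "G1"],
    12: ["A1", "C1", "D1", "E1", "F1", "G1"],
    16: BLUE,
    24: BLUE + ["A2", "C2", "E2", "G2"],
    32: BLUE + BLACK,
}

def slots_to_populate(total_dimms: int) -> dict[str, list[str]]:
    """Return the slots to fill on each CPU for a supported total DIMM count."""
    per_cpu = SLOTS_PER_CPU[total_dimms]  # KeyError if the count is unsupported
    return {"CPU1": per_cpu, "CPU2": per_cpu}

# Example: a 24-DIMM configuration fills all blue slots plus A2, C2, E2, G2.
print(slots_to_populate(24)["CPU1"])
```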
4. NUTANIX HARDWARE NAMING CONVENTION
Every Nutanix block has a unique name based on the standard Nutanix naming convention.
The Nutanix hardware model name uses the format prefix-body-suffix.
For all Nutanix platforms, the prefix is NX to indicate that the platform is sold directly by Nutanix and
support calls are handled by Nutanix.
The body uses the format ABCD | Y. The following table describes the body values; a worked decoding example follows Table 14.
Figure 7: Nutanix Hardware Naming Convention
Table 13: Body
Body Description
A Indicates the product series and is one of the following values.
• 1 – Entry-level and ROBO
• 3 – Balanced compute and storage
• 8 – High performance
• 9 – Accelerated system
B Indicates the number of nodes.
• For single-node platforms, B is always 1.
• For multi-node platforms, B can be 1, 2, 3, or 4.
Note: For multi-node platforms, the documentation always
uses a generic zero for B.
C Indicates the chassis form factor and is one of the following
values.
• 1 – 1U1N (one rack unit high with one node)
• 3 – 2U2N (two rack units high with two nodes)
• 5 – 2U1N (two rack units high with one node)
• 6 – 2U4N (two rack units high with four nodes)
• 7 – 1U1N (one rack unit high with one node)
D Indicates the drive form factor and is one of the following values.
• 0 – 2.5 inch drives
• 1 – E1.S drives
• 3 – E3.S drives
• 5 – 3.5 inch drives
Y Indicates platform types, and takes one of the following values:
• S – Single socket
• G – GPU (Not used in G9 since GPU is available on multiple
models)
• A – AMD
Table 14: Suffix
Suffix Description
G5 The platform uses the Intel Broadwell CPU.
G6 The platform uses the Intel Skylake CPU.
G7 The platform uses the Intel Cascade Lake CPU.
G8 or N-G8 The platform uses the Intel Ice Lake CPU.
G9 The platform uses the Intel Sapphire Rapids, Emerald Rapids or
AMD Genoa CPU.
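As a worked example of the convention, the Python sketch below decodes NX-8170-G9 using the values from Tables 13 and 14. The dictionaries simply restate those tables; the decoding function is illustrative only and handles the plain ABCD body form (a trailing Y qualifier such as S is ignored in this sketch).

```python
# Mappings copied from Tables 13 and 14.
SERIES = {"1": "Entry-level and ROBO", "3": "Balanced compute and storage",
          "8": "High performance", "9": "Accelerated system"}
CHASSIS = {"1": "1U1N", "3": "2U2N", "5": "2U1N", "6": "2U4N", "7": "1U1N"}
DRIVES = {"0": "2.5 inch drives", "1": "E1.S drives",
          "3": "E3.S drives", "5": "3.5 inch drives"}
GENERATION = {"G9": "Intel Sapphire Rapids, Emerald Rapids, or AMD Genoa"}

def decode(model: str) -> dict[str, str]:
    """Decode an NX model name of the form NX-ABCD-suffix (illustrative)."""
    _, body, suffix = model.split("-")
    a, b, c, d = body[:4]
    return {
        "series": SERIES[a],
        "nodes": b,
        "chassis": CHASSIS[c],
        "drive form factor": DRIVES[d],
        "CPU generation": GENERATION[suffix],
    }

# NX-8170-G9: high performance, one node, 1U1N chassis, 2.5 inch drives, G9 CPUs.
print(decode("NX-8170-G9"))
```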
COPYRIGHT
Copyright 2025 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
variable_value: The action depends on a value that is unique to your environment.
ncli> command: The commands are executed in the Nutanix nCLI.
user@host$ command: The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command: The commands are executed as the root user in the vSphere or Acropolis host shell.
> command: The commands are executed in the Hyper-V host shell.
output: The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface | Target | Username | Password
Nutanix web console | Nutanix Controller VM | admin | Nutanix/4u
vSphere Web Client | ESXi host | root | nutanix/4u
vSphere client | ESXi host | root | nutanix/4u
SSH client or console | ESXi host | root | nutanix/4u
SSH client or console | AHV host | root | nutanix/4u
SSH client | Nutanix Controller VM | nutanix | nutanix/4u
SSH client | Nutanix Controller VM | admin | Nutanix/4u
SSH client or console | Acropolis OpenStack Services VM (Nutanix OVM) | root | admin
Version
Last modified: May 13, 2025 (2025-05-13T10:38:11+05:30)