

System Specifications for


Single-Node G7 Platforms
NX-3155G-G7/NX-8150-G7/NX-8155-G7

December 11, 2019


Contents

Visio Stencils

1. System Specifications
Node Naming (NX-3155G-G7)
NX-3155G-G7 System Specifications
NX-3155G-G7 GPU Specifications
Field-Replaceable Unit List (NX-3155G-G7)
Node Naming (NX-8150-G7)
NX-8150-G7 System Specifications
Field-Replaceable Unit List (NX-8150-G7)
Node Naming (NX-8155-G7)
NX-8155-G7 System Specifications
Field-Replaceable Unit List (NX-8155-G7)

2. Component Specifications
Controls and LEDs for Single-node G7 Platforms
LED Meanings for Network Cards
Power Supply Unit (PSU) Redundancy and Node Configuration (G7 Platforms)
Nutanix DMI Information (G7 Platforms)
Block Connection in a Customer Environment
Connecting the Nutanix Block

3. Memory Configurations
Supported Memory Configurations (G7 Platforms)

Copyright
License
Conventions
Default Cluster Credentials
Version
VISIO STENCILS
Visio stencils for Nutanix products are available on VisioCafe.



1
SYSTEM SPECIFICATIONS
Node Naming (NX-3155G-G7)
Nutanix assigns a name to each node in a block, which varies by product type.
The NX-3155G-G7 block contains a single node named Node A.

Figure 1: NX-3155G-G7 3.5-inch drive order (hybrid configuration)

Table 1: Supported drive configurations

Hybrid SSD and HDD Two SSDs and four HDDs, with six empty drive slots. The two SSDs contain the Controller VM and metadata, while the four HDDs are data-only.

All-flash Two, four, or six SSDs; the remaining drive slots are empty.



Figure 2: NX-3155G-G7 control panel

Figure 3: NX-3155G-G7 Back panel

The NX-3155G-G7 supports one, two, or three NICs. The supported NIC options follow.

• Quad-port 10 GbE SFP+


• Dual-port 10 GbE SFP+
• Dual-port 10GBase-T
• Dual-port 25 GbE SFP28
• Dual-port 40 GbE QSFP+



Figure 4: NIC options for the NX-3155G-G7

The NX-3155G-G7 supports the following GPU cards. You cannot mix GPU types in the same
chassis.

Table 2: Supported GPU configurations

GPU card Supported configurations

NVIDIA Tesla M10 One or two

NVIDIA Tesla V100 One or two

NVIDIA Tesla T4 One, two, three, or four

Note: Power consumption is higher when GPU cards are present.

CAUTION: Do not use the NVIDIA Tesla M10 GPU card on hypervisor hosts with more than 1 TB of
total memory, due to an NVIDIA M-series architectural limitation. The T4 and V100 GPU cards are
not subject to this limitation.



Figure 5: Riser card slots

Figure 6: NX-3155G-G7 dimensions

NX-3155G-G7 System Specifications

Table 3: System Characteristics

Nodes 1 node per block



CPU
• 2 x Intel Xeon Platinum_8268, 24-core Cascade Lake @ 2.9 GHz (48 cores
per node)
• 2 x Intel Xeon Gold_6248, 20-core Cascade Lake @ 2.5 GHz (40 cores per
node)
• 2 x Intel Xeon Gold_6254, 18-core Cascade Lake @ 3.1 GHz (36 cores per
node)
• 2 x Intel Xeon Gold_6240, 18-core Cascade Lake @ 2.6 GHz (36 cores per
node)
• 2 x Intel Xeon Gold_6244, 8-core Cascade Lake @ 3.6 GHz (16 cores per
node)

Memory
• DDR4-2933, 1.2V, 32 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB
• DDR4-2933, 1.2V, 64 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

12 x 64 GB = 768 GB
16 x 64 GB = 1 TB
24 x 64 GB = 1.5 TB

Storage: Hybrid Carriers: 3.5-inch carriers


2 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x HDD

• 6 TB
• 8 TB
• 12 TB



Storage: Hybrid (SED) Carriers: 3.5-inch carriers
2 x SSD

• 1.92 TB
• 3.84 TB
4 x HDD

• 6 TB
• 8 TB

Storage: All-flash Carriers: 3.5-inch carriers

2, 4, or 6 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB

Storage: All-flash (SED) Carriers: 3.5-inch carriers

2, 4, or 6 x SSD

• 1.92 TB
• 3.84 TB

Hypervisor Boot Drive 2 x RAID M.2 device

• 240 GB

Network PCIe slot 1, 2, or 3

• Dual-port 10 GbE SFP+ NIC


• Quad-port 10 GbE SFP+ NIC
• Dual-port 10 GBase-T NIC
• Dual-port 25 GbE SFP28 NIC
• Dual-port 40 GbE QSFP+ NIC
Serverboard

• 2 x 10 GbE BASE-T RJ45

USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 4 x (x8) PCIe 3.0 (low-profile) per node

Chassis fans 4 x fans per block



Table 4: System Characteristics

Form factor 2 RU rack-mount chassis

Block (standalone) Weight: 23.68 kg (52.2 lb.)

Depth: 788 mm (31.02 in.)
Width: 483 mm (19.02 in.)
Height: 89 mm (3.5 in.)

Block (package with rack rails and accessories) Weight: 35.25 kg (77.7 lb.)

Rack rail length Minimum: 650.25 mm (25.6 in.)

Maximum: 839.5 mm (33.05 in.)

Table 5: Block power and electrical

Power supplies 1600 W output @ 100-240 VAC, 8-13 A, 50-60 Hz

Power consumption Maximum: 1400 W

Typical: 980 W
Block with 2 x 28-core CPU, 2.1 GHz, 24 x 64 GB DIMMs

Thermal dissipation Maximum: 4777 BTU/hr

Typical: 3344 BTU/hr
Block with 2 x 28-core CPU, 2.1 GHz, 24 x 64 GB DIMMs
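
The thermal dissipation figures follow directly from the power consumption figures, since 1 W is approximately 3.412 BTU/hr (a standard conversion factor, not a value taken from this document). A minimal sketch of the conversion, using the NX-3155G-G7 numbers above:

# Convert electrical power draw (watts) to heat output (BTU/hr).
# 1 W is approximately 3.412142 BTU/hr (standard conversion factor).
WATTS_TO_BTU_PER_HR = 3.412142

def watts_to_btu_per_hr(watts: float) -> float:
    return watts * WATTS_TO_BTU_PER_HR

# NX-3155G-G7 figures from the table above.
for label, watts in [("Maximum", 1400), ("Typical", 980)]:
    print(f"{label}: {watts} W is about {watts_to_btu_per_hr(watts):.0f} BTU/hr")
# Prints approximately 4777 and 3344 BTU/hr, matching the table.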

Table 6: Operating environment

Operating temperature 10-30° C

Non-operating temperature -40-70° C

Operating relative humidity 20-95% (non-condensing)

Non-operating relative humidity 5-95% (non-condensing)

Certifications

• Energy Star

• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS

NX-3155G-G7 GPU Specifications


The NX-3155G-G7 supports GPU cards. When a GPU card is present, the maximum operating
temperature is 30° C.

Table 7: Minimum Firmware and Software Versions When Using a GPU Card

Firmware or Software NVIDIA Tesla M10 NVIDIA Tesla V100 NVIDIA Tesla T4

BIOS PU41.002 PU41.002 PU41.002

BMC 7.0 7.0 7.0

Hypervisor
• AHV • AHV • AHV
• ESXi 6.0 • ESXi 6.0 • ESXi 6.0
• ESXi 6.5 • ESXi 6.5 • ESXi 6.5

CAUTION: Do not use an M10 GPU card on hypervisor hosts with more than 1 TB of total memory.

Foundation 4.4.3 4.4.3 4.4.3

AOS 5.10.6 5.10.6 5.10.6




NCC 3.9.0 3.9.0 3.9.0

NVIDIA vGPU driver GRID 5.0 GRID 5.0 GRID 6.0

Field-Replaceable Unit List (NX-3155G-G7)


Image Description

Bezel, 2U, with lock and key

Chassis, 2U with CPUs and fans.

M.2 device, 240 GB (hypervisor boot drive)

Drives in 3.5-inch carriers

SSD, 1.92 TB, SAS/SATA

SSD, 3.84 TB, SAS/SATA

SSD, 7.68 TB, SAS/SATA

HDD, SATA, 6 TB

HDD, SATA, 8 TB

HDD, SATA, 12 TB

Encrypted drive: SED SSD, SAS, 1.92 TB

Encrypted drive: SED SSD, SAS, 3.84 TB

Encrypted drive: SED HDD, SAS, 6 TB

Encrypted drive: SED HDD, SAS, 8 TB

Memory, 32GB, DDR4-2933, ECC, RDIMM

Memory, 64GB, DDR4-2933, ECC, LRDIMM

Note: Each node must contain only DIMMs of the same type, speed, and capacity.




HBA, LSI Logic 3008 controller

System Packaging, SM (shipping carton)

Power Supply, 1600W

Fan, 80 mm × 80 mm × 38 mm, 13,500 rpm

Rail, 2U

Cable, 1m, SFP+ to SFP+

Cable, 3m, SFP+ to SFP+

Cable, 5m, SFP+ to SFP+

Transceiver, Ethernet, SR, SFP+

NIC, dual-port 10 GbE SFP+

NIC, quad-port 10 GbE SFP+




NIC, dual-port 10GBASE-T

NIC, dual-port 25 GbE SFP28

NIC, dual-port 40 GbE QSFP+

NVIDIA Tesla M10 GPU card




NVIDIA Tesla V100 GPU card (16GB or 32GB version)

NVIDIA Tesla T4 GPU card

Note: The T4 card does not use a power cable.

Cable, 12V GPU Power Cable (for NVIDIA M10 cards)

Cable, 12V GPU Power Cable (for NVIDIA V100 cards)

Node Naming (NX-8150-G7)


Nutanix assigns a name to each node in a block, which varies by product type.
The NX-8150-G7 block contains a single node named Node A.
The NX-8150-G7 supports solid-state drives (SSDs) and non-volatile memory express (NVMe)
drives. The SSDs always contain the Controller VM and metadata. NVMe drives, if used, must
always be placed in the last four drive slots on the right.
The NX-8150-G7 supports the following drive configurations:



Table 8: Drive configurations

All-SSD 8, 12, 16, 20, or 24 SSDs. Populate the drive slots from left to right in groups of four.

SSD with NVMe 8, 12, 16, or 20 SSDs plus 4 NVMe drives. Populate the drive slots from left to right in groups of four. The NVMe drives must always be placed in the last four drive slots on the right, slots 21 through 24.

Figure 7: All-SSD drive order

Figure 8: SSD with NVMe drive order

Figure 9: Control panel



Figure 10: NX-8150-G7 Back panel

You can install one to three NICs. All installed NICs must be identical. Always populate the NIC
slots in order: NIC1, NIC2, NIC3.

• Dual-port 10 GbE SFP+


• Dual-port 10GBase-T
• Dual-port 25 GbE ConnectX-4 SFP28
• Dual-port 40 GbE ConnectX-4 QSFP+

Figure 11: NIC options



Figure 12: Exploded diagram



Figure 13: Dimensions

NX-8150-G7 System Specifications

Table 9: System Characteristics

Nodes 1 node per block

CPU
• 2 x Intel Xeon Platinum_8280M, 28-core Cascade Lake @ 2.7 GHz (56
cores per node)
• 2 x Intel Xeon Platinum_8280, 28-core Cascade Lake @ 2.7 GHz (56 cores
per node)
• 2 x Intel Xeon Platinum_8268, 24-core Cascade Lake @ 2.9 GHz (48 cores
per node)
• 2 x Intel Xeon Platinum_8260M, 24-core Cascade Lake @ 2.4 GHz (48
cores per node)
• 2 x Intel Xeon Gold_6254, 18-core Cascade Lake @ 3.1 GHz (36 cores per
node)
• 2 x Intel Xeon Gold_6242, 16-core Cascade Lake @ 2.8 GHz (32 cores per
node)
• 2 x Intel Xeon Gold_6244, 8-core Cascade Lake @ 3.6 GHz (16 cores per
node)



Memory
• DDR4-2933, 1.2V, 128 GB, RDIMM

Note: 128GB DIMMs are supported only with M-type CPUs, such as
the 8280M. 128GB DIMMs are not supported with hybrid SSD and HDD
configurations. Each node must contain only DIMMs of the same type,
speed, and capacity.

12 x 128 GB = 1.5 TB
16 x 128 GB = 2 TB
24 x 128 GB = 3 TB
• DDR4-2933, 1.2V, 64 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

6 x 64 GB = 384 GB
8 x 64 GB = 512 GB
12 x 64 GB = 768 GB
16 x 64 GB = 1 TB
24 x 64 GB = 1.5 TB
• DDR4-2933, 1.2V, 32 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

4 x 32 GB = 128 GB
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB

Storage: All-flash Carriers: 2.5-inch carriers
8, 12, 16, 20, or 24 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB

Note: A platform can be partially populated with SSDs in groups of four.

Storage: All-flash (SED) Carriers: 2.5-inch carriers
8, 12, 16, 20, or 24 x SSD

• 1.92 TB
• 3.84 TB



Storage: SSD with NVMe Carriers: 2.5-inch carriers
8, 12, 16, or 20 x SSD

• 1.92 TB
• 3.84 TB
4 x NVMe

• 2 TB
• 4 TB

Hypervisor Boot Drive 2 x RAID M.2 device

• 240 GB

Network PCIe slot 1

• Dual-port 10 GbE SFP+ NIC


• Dual-port 10 GBASE-T NIC
• Dual-port 25 GbE SFP28 NIC
• Dual-port 40 GbE QSFP+ NIC
Serverboard

• 2 x 10 GBase-T ports (Port 1 is IPMI failover)


• 1 x 10/100/1000BASE-T (dedicated IPMI port)

USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 3 x PCIe 3.0 (low-profile) per node (1 @ x8, 2 @ x16)

Chassis fans 4 x fans per block

Table 10: System Characteristics

Form factor 2 RU rack-mount chassis

Block (standalone) Weight: 20.9 kg (46.1 lb.)

Depth: 780 mm (30.71 in.)
Width: 483 mm (19.02 in.)
Height: 90 mm (3.54 in.)

Block (package with rack rails and accessories) Weight: 32.51 kg (71.7 lb.)



Rack rail length Minimum: 673 mm (26.5 in.)

Maximum: 925 mm (36.42 in.)

Table 11: Block power and electrical

Power supplies 1600 W output @ 100-240 VAC, 8-13 A, 50-60 Hz

Power consumption Maximum: 1206 W

Typical: 784 W
Block with 2 x 28-core CPU, 2.7 GHz, 24 x 128 GB DIMMs

Thermal dissipation Maximum: 4116 BTU/hr

Typical: 2676 BTU/hr
Block with 2 x 28-core CPU, 2.7 GHz, 24 x 128 GB DIMMs

Table 12: Operating environment

Operating temperature 10-35° C

Non-operating temperature -40-70° C

Operating relative humidity 20-95% (non-condensing)

Non-operating relative humidity 5-95% (non-condensing)

Certifications

• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI

• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS

Field-Replaceable Unit List (NX-8150-G7)


Image Description

Bezel, 2U, w/Lock and Key, SM

Chassis, 2U, with fans.

M.2 device (Hypervisor Boot Drive) (240GB, 22x80 mm)

Drives in 2.5-inch carriers

SSD, 1.92 TB, SAS/SATA

SSD, 3.84 TB, SAS/SATA

SSD, 7.68 TB, SAS/SATA

Encrypted SSD, SED-FIPS, 1.92 TB

Encrypted SSD, SED-FIPS, 3.84 TB

NVMe, 2 TB

NVMe, 4 TB




Memory, 32GB, DDR4-2933, 1.2V, ECC, RDIMM

Memory, 64GB, DDR4-2933, 1.2V, ECC, RDIMM

Memory, 128GB, DDR4-2933, 1.2V, ECC, RDIMM


Note: Each node must contain only DIMMs of the same type and capacity. (Adding higher-speed DIMMs is supported, but the DIMMs will run only at the supported speed of your platform.)

Note: 128GB DIMMs are supported only with M-type CPUs, such as the 8280M.

HBA, LSI Logic 3008 controller

System Packaging, SM, NX-8150-G7 series (shipping carton). Package dimensions: 38” x 25.5” x 12” (96.52 x 64.77 x 30.48 cm)

Power Supply, 1600W

Fan, 80 mm × 80 mm × 38 mm

Rail

Cable, 1m, SFP+ to SFP+

Cable, 3m, SFP+ to SFP+

Cable, 5m, SFP+ to SFP+

NIC, Dual-port 10 GbE, SFP+

NIC, Dual-port 10 GBase-T




NIC, Dual-port CX4 25 GbE, SFP28

NIC, Dual-port CX4 40 GbE, QSFP+

Node Naming (NX-8155-G7)


Nutanix assigns a name to each node in a block, which varies by product type.
The NX-8155-G7 requires a minimum of Foundation 4.4.2. Update all nodes in the cluster to
Foundation 4.4.2 or later.
The NX-8155-G7 block contains a single node named Node A.
The NX-8155-G7 has twelve drive slots. The first two SSDs contain the Controller VM and
metadata. All HDDs (if used) are data-only drives.

Figure 14: NX-8155-G7 front panel (NVMe configuration shown)



Partial Population
The NX-8155-G7 supports partially populating the chassis with fewer than the maximum
number of drives, according to the following restrictions:

• Partially populated configurations support only even numbers of drives.


• SSD with NVMe configurations must include a minimum of four SSDs.
• Populate the drive slots in order from bottom to top and left to right.
When partially populating a node, Nutanix recommends that you put an empty drive carrier in
each unpopulated drive slot, for air flow and to prevent dust contamination.

Drive Configurations

Note: If you order a hybrid SSD and HDD configuration, you can later convert the platform to
an all-SSD configuration. However, if you begin with either a hybrid SSD and HDD or an all-SSD
configuration, you cannot later convert to an SSD with NVMe configuration.

Certain capacities of HDDs can only mix with certain capacities of SSDs.

Table 13: NX-8155-G7 Hybrid SSD and HDD Drive Capacities

HDD capacity Can mix with SSD capacity

6 TB or 8 TB 1.92 TB or 3.84 TB

12 TB 3.84 TB

6 TB or 8 TB, encrypted (SED) 1.92 TB or 3.84 TB, encrypted (SED)

Table 14: NX-8155-G7 Hybrid SSD and HDD Drive Configurations

Supported configurations

2 SSDs in slots 1 and 2; 4 HDDs in slots 3 through 6; all other slots empty

2 SSDs in slots 1 and 2; 6 HDDs in slots 3 through 8; all other slots empty

2 SSDs in slots 1 and 2; 8 HDDs in slots 3 through 10; all other slots empty

2 SSDs in slots 1 and 2; 10 HDDs in slots 3 through 12

4 SSDs in slots 1 through 4; 8 HDDs in slots 5 through 12

Figure 15: Hybrid configuration 1



Figure 16: Hybrid configuration 2

Table 15: NX-8155-G7 All-SSD Drive Configurations

Supported configurations

2 SSDs in slots 1 and 2; all other slots empty

4 SSDs in slots 1 through 4; all other slots empty

6 SSDs in slots 1 through 6; all other slots empty

8 SSDs in slots 1 through 8; all other slots empty

10 SSDs in slots 1 through 10; all other slots empty

12 SSDs (fill all slots)

Populate the drive slots two by two, in numerical order. For a four-drive partial
configuration, put the drives in slots 1 through 4; for a six-drive partial configuration, put
the drives in slots 1 through 6; and so on.
Figure 17: All-SSD configuration

Table 16: NX-8155-G7 SSD with NVMe Drive Configurations

Supported configurations

4 SSDs in slots 1 through 4; 4 NVMe drives in slots 9 through 12; all other slots empty

6 SSDs in slots 1 through 6; 4 NVMe drives in slots 9 through 12; all other slots empty

8 SSDs in slots 1 through 8; 4 NVMe drives in slots 9 through 12

Put four NVMe drives in slots 9 through 12. No other NVMe configurations are supported.



When using NVMe drives, you must include a minimum of four SSDs. Populate the drive
slots two by two, in numerical order.
Figure 18: SSD with NVMe configuration
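
The population rules above lend themselves to a simple sanity check. The following sketch is illustrative only; the helper name and the rule encoding are paraphrased from this section and are not a Nutanix tool.

# Illustrative check of an NX-8155-G7 drive population plan against the
# rules described above. Not a Nutanix utility; slot numbers are 1 through 12.
def validate_nx8155_population(ssd_slots, hdd_slots=(), nvme_slots=()):
    ssd, hdd, nvme = set(ssd_slots), set(hdd_slots), set(nvme_slots)
    total = len(ssd) + len(hdd) + len(nvme)

    if total % 2 != 0:
        return "Partially populated configurations support only even numbers of drives."
    if nvme and nvme != {9, 10, 11, 12}:
        return "NVMe drives must occupy exactly slots 9 through 12."
    if nvme and len(ssd) < 4:
        return "SSD with NVMe configurations require a minimum of four SSDs."
    if hdd and not ssd.issuperset({1, 2}):
        return "Hybrid configurations place the SSDs first (starting in slots 1 and 2)."
    occupied = sorted(ssd | hdd)  # SSDs and HDDs fill slots in numerical order
    if occupied != list(range(1, len(occupied) + 1)):
        return "Populate the SSD/HDD slots in numerical order with no gaps."
    return "OK"

# Example: 6 SSDs in slots 1-6 plus 4 NVMe drives in slots 9-12.
print(validate_nx8155_population(range(1, 7), nvme_slots=range(9, 13)))  # OK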

Figure 19: Control panel

Figure 20: NX-8155-G7 Back panel

You can install one, two, or three NICs. All installed NICs must be identical. Always populate the
NIC slots in order: NIC1, NIC2, NIC3.

• Dual-port 10 GbE SFP+ (one to three NICs)


• Dual-port 10GBASE-T (one to three NICs)
• Dual-port 25 GbE SFP28 (one to three NICs)
• Dual-port 40 GbE QSFP+ (one to three NICs)

Figure 21: NIC options

Figure 22: NX-8155-G7 dimensions



Exploded Diagram

NX-8155-G7 System Specifications

Table 17: System Characteristics

Nodes 1 node per block



CPU
• 2 x Intel Xeon Platinum_8280, 28-core Cascade Lake @ 2.7 GHz (56 cores
per node)
• 2 x Intel Xeon Platinum_8268, 24-core Cascade Lake @ 2.9 GHz (48 cores
per node)
• 2 x Intel Xeon Gold_6248, 20-core Cascade Lake @ 2.5 GHz (40 cores per
node)
• 2 x Intel Xeon Gold_6254, 18-core Cascade Lake @ 3.1 GHz (36 cores per
node)
• 2 x Intel Xeon Gold_5220, 18-core Cascade Lake @ 2.2 GHz (36 cores per
node)
• 2 x Intel Xeon Gold_6242, 16-core Cascade Lake @ 2.8 GHz (32 cores per
node)
• 2 x Intel Xeon Silver_4214, 12-core Cascade Lake @ 2.2 GHz (24 cores per
node)
• 2 x Intel Xeon Gold_6244, 8-core Cascade Lake @ 3.6 GHz (16 cores per
node)

Memory
• DDR4-2933, 1.2V, 64 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

4 x 64 GB = 256 GB
6 x 64 GB = 384 GB
8 x 64 GB = 512 GB
12 x 64 GB = 768 GB
16 x 64 GB = 1 TB
24 x 64 GB = 1.5 TB
• DDR4-2933, 1.2V, 32 GB, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and
capacity.

4 x 32 GB = 128 GB
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB



Storage: Hybrid Carriers: 3.5-inch carriers
2 or 4 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB
4, 6, 8, or 10 x HDD

• 6 TB
• 8 TB
• 12 TB

Storage: Hybrid (SED) Carriers: 3.5-inch carriers
2 or 4 x SSD

• 1.92 TB
• 3.84 TB
4, 6, 8, or 10 x HDD

• 6 TB
• 8 TB

Storage: All-flash Carriers: 3.5-inch carriers

4, 6, 8, 10, or 12 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB

Note: When partially populating a platform, use only even numbers of drives.

Storage: All-flash (SED) Carriers: 3.5-inch carriers

4, 6, 8, 10, or 12 x SSD

• 1.92 TB
• 3.84 TB



Storage: SSD with NVMe Carriers: 3.5-inch carriers
4, 6, or 8 x SSD

• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x NVMe

• 2 TB
• 4 TB

Hypervisor Boot Drive 2 x RAID M.2 device

• 240 GB

Network PCIe slot 1, 2, 3

• Dual-port 10 GbE SFP+ NIC


• Dual-port 10 GBASE-T NIC
• Dual-port 25 GbE SFP28 NIC
• Dual-port 40 GbE QSFP+ NIC
Serverboard

• 2 x 10 GBase-T ports (Port 1 is IPMI failover)


• 1 x 10/100/1000BASE-T (dedicated IPMI port)

USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 2 x (x8) PCIe 3.0 (low-profile) per node (both slots filled with NICs)

Chassis fans 4 x fans per block

Table 18: System Characteristics

Form factor 2 RU rack-mount chassis

Block (standalone) Weight: 25.13 kg (55.4 lb.)

Depth: 788 mm (31.02 in.)
Width: 482.92 mm (19.01 in.)
Height: 89 mm (3.5 in.)

Block (package with rack rails and accessories) Weight: 36.7 kg (80.9 lb.)



Rack rail length Minimum: 673 mm (26.5 in.)

Maximum: 925 mm (36.42 in.)

Table 19: Block power and electrical

Power supplies 1600 W output @ 100-240 VAC, 8-13 A, 50-60 Hz

Power consumption Maximum: 1070 W

Typical: 700 W
Block with 2 x 28-core CPU, 2.5 GHz, 24 x 64 GB DIMMs

Thermal dissipation Maximum: 3650 BTU/hr

Typical: 2388 BTU/hr
Block with 2 x 28-core CPU, 2.5 GHz, 24 x 64 GB DIMMs

Table 20: Operating environment

Operating temperature 10-35° C

Non-operating temperature -40-70° C

Operating relative humidity 20-95% (non-condensing)

Non-operating relative humidity 5-95% (non-condensing)

Certifications

• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI

• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS

Field-Replaceable Unit List (NX-8155-G7)


Image Description

Bezel, 2U, w/Lock and Key, SM

Chassis, 2U, with fans.

M.2 drive, 240GB, 22mm × 80mm

Drives in 3.5-inch carriers

SSD, 1.92 TB, SATA


SSD, 3.84 TB, SAS/SATA

HDD, 6 TB, SAS/SATA

HDD, 8 TB, SAS/SATA

HDD, 12 TB, SAS/SATA

Encrypted SSD, SED/SAS, 1.92 TB

Encrypted SSD, SED/SAS, 3.84 TB

Encrypted HDD, SED/SAS, 6 TB

Encrypted HDD, SED/SAS, 8 TB

NVMe, 2 TB

NVMe, 4 TB




Memory, 32GB, DDR4-2933, 1.2V, ECC, RDIMM

Memory, 64GB, DDR4-2933, 1.2V, ECC, RDIMM

Note: Each node must contain only DIMMs of the same type, speed, and capacity.

HBA, LSI Logic 3008 controller

System Packaging (shipping carton). Package dimensions: 38” x 25.5” x 12” (96.52 x 64.77 x 30.48 cm)

Power supply, 1600W

Fan, 80 mm × 80 mm × 38 mm

Rails

Cable, 1m, SFP+ to SFP+

Cable, 3m, SFP+ to SFP+

Cable, 5m, SFP+ to SFP+

NIC, dual-port 10 GbE SFP+




NIC, dual-port 10GBASE-T

NIC, dual-port 25 GbE SFP28

NIC, dual-port 40 GbE QSFP28



2
COMPONENT SPECIFICATIONS
Controls and LEDs for Single-node G7 Platforms

Table 21: Front Panel Controls and Indicators

LED or Button Function

Power button System on/off (press and hold for 4 seconds to power off)
Reset button System reset
Power LED Solid green when block is powered on
Disk Activity LED Flashing green on activity
10 GbE LAN 1 LED (Port 1) Activity = green flashing
10 GbE LAN 2 LED (Port 2) Activity = green flashing
Multiple-function "i" LED:
Unit identification = solid blue
Overheat condition = solid red
Power supply failure = flashing red at 0.25 Hz
Fan failure = flashing red at 1 Hz

Unit Identifier button (UID) Press button to illuminate the i LED (blue)

Table 22: Drive LEDs

Top LED: Activity Blue or green, blinking = I/O activity, off = idle
Bottom LED: Status Solid red = failed drive; on for five seconds after boot = power on

Table 23: Back Panel Controls and Indicators

LED Function

PSU1 and PSU2 LED Green = OK (PSU normal)

Yellow = no node power, or PSU is not inserted completely
Red = failure
10 GbE LAN 1 LED (Port 1) Activity = green flashing, Link = solid green
10 GbE LAN 2 LED (Port 2) Activity = green flashing, Link = solid green
1 GbE dedicated IPMI Left LED green = 100 Mbps, left LED amber = 1 Gbps; right LED flashing yellow = activity
i Unit identification LED Solid blue
2 x 10 GbE and 4 x 10 GbE NIC LEDs Link and Activity = green flashing 10 Gbps,
Amber = 1 Gbps

For LED states for add-on NICs, see LED Meanings for Network Cards.

LED Meanings for Network Cards


Descriptions of LEDs for supported NICs.
Different NIC manufacturers use different LED colors and blink states. Not all NICs are
supported for every Nutanix platform. See the System Specifications for your platform to verify
which NICs are supported.

Table 24: On-Board Ports

NIC Link (LNK) LED Activity (ACT) LED

1 GbE dedicated IPMI Link: green = 100 Mbps, yellow = 1 Gbps. Activity: blinking yellow

1 GbE shared IPMI Link: green = 1 Gbps, yellow = 100 Mbps. Activity: blinking yellow

i Unit identification LED Blinking blue: UID has been activated.

Table 25: SuperMicro NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 1 GbE Link: green = 100 Mbps, yellow = 1 Gb/s, off = 10 Mb/s or no connection. Activity: blinking yellow

Dual-port 10G SFP+ Link: green = 10 Gb, yellow = 1 Gb. Activity: blinking green



Table 26: Silicom NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 10G SFP+ Link: green (all speeds). Activity: solid green = idle, blinking green = activity

Quad-port 10G SFP+ Link: blue = 10 Gb, yellow = 1 Gb. Activity: solid green = idle, blinking green = activity

Dual-port 10GBase-T Link: yellow = 1 Gb/s, green = 10 Gb/s. Activity: blinking green

Table 27: Mellanox NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 10G SFP+ ConnectX-3 Pro Link: green = 10 Gb speed with no traffic, blinking yellow = 10 Gb speed with traffic, not illuminated = no connection. Activity: blinking yellow and green

Dual-port 40G SFP+ ConnectX-3 Pro Link: solid green = good link. Activity: blinking yellow = activity, not illuminated = no activity

Dual-port 10G SFP28 ConnectX-4 Lx Link: solid yellow = good link, blinking yellow = physical problem with link. Activity: solid green = valid link with no traffic, blinking green = valid link with active traffic

Dual-port 25G SFP28 ConnectX-4 Lx Link: solid yellow = good link, blinking yellow = physical problem with link. Activity: solid green = valid link with no traffic, blinking green = valid link with active traffic

Power Supply Unit (PSU) Redundancy and Node Configuration (G7 Platforms)

Note: Nutanix recommends that you carefully plan your AC power source needs, especially when the cluster consists of mixed models.

Nutanix recommends a 180-240 V AC power source to ensure PSU redundancy. However, as shown in the following table and depending on the number of nodes in the chassis, some NX Series platforms can maintain PSU redundancy on a 100-210 V AC power source.



Table 28: PSU Redundancy and Node Configuration

Nutanix Model Number of Nodes Redundancy at 110V Redundancy at 208V

NX-1065-G7 1 to 2 YES YES

3 to 4 NO YES

NX-3060-G7 1 to 2 YES YES

3 to 4 NO YES

NX-3155G-G7 1, with GPU NO YES

1, without GPU YES YES

NX-8035-G7 1 YES YES

2 NO YES

NX-8150-G7 1 NO YES

NX-8155-G7 1 At 110V: YES if the ambient temperature is 25° C or less, NO if greater than 25° C. At 208V: YES
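
As a rough, back-of-the-envelope illustration of why redundancy depends on line voltage, the sketch below multiplies the PSU nameplate figures quoted earlier (1600 W output @ 100-240 VAC, 8-13 A). Real PSU derating and efficiency curves are model-specific and are not given in this document, so treat this only as an intuition aid.

# Rough illustration using the PSU nameplate range quoted above.
# These are not Nutanix-published derating figures.
for volts, amps in [(110, 13), (208, 8)]:
    print(f"{volts} V x {amps} A = {volts * amps} VA of input per PSU")
# 110 V x 13 A = 1430 VA: below the 1600 W output rating, so one PSU may not
#                         carry a heavily loaded node by itself at low line.
# 208 V x  8 A = 1664 VA: enough headroom for full output, which is why 208 V
#                         preserves redundancy in more configurations.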

Nutanix DMI Information (G7 Platforms)


Format for Nutanix DMI strings.
VMware reads model information from the Desktop Management Interface (DMI) table.
For platforms with Intel Cascade Lake CPUs, Nutanix provides model information to the DMI
table in the following format:
NX-<motherboard_id><NIC_id>-<HBA_id>-G7 (the motherboard and NIC identifiers are concatenated without a separator between them)

motherboard_id has the following options:

Table 29:

Argument Option

T X11 multi-node motherboard

U X11 single-node motherboard

W X11 single-socket single-node motherboard

NIC_id has the following options:

Table 30:

Argument Option

00 uses on-board NIC



Argument Option

D1 dual-port 1G NIC

Q1 quad-port 1G NIC

DT dual-port 10GBaseT NIC

QT quad-port 10GBaseT NIC

DS dual-port SFP+ NIC

QS quad-port SFP+ NIC

HBA_id specifies the number of nodes and type of HBA controller. For example:

Table 31:

Argument Option

1NL3 single-node LSI3008

2NL3 2-node LSI3008

4NL3 4-node LSI3008

Table 32: Examples

DMI string Explanation Nutanix model

NX-TDT-4NL3-G7 X11 motherboard with on-board NIC, 4 nodes with LSI3008 HBA controllers NX-1065-G7, NX-3060-G7

NX-TDT-2NL3-G7 X11 motherboard with on-board NIC, 2 nodes with LSI3008 HBA controllers NX-8035-G7

NX-UDT-1NL3-G7 X11 motherboard with on-board NIC, 1 node with LSI3008 HBA controller NX-3155G-G7, NX-8150-G7, NX-8155-G7
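
Because the model string is built by straightforward concatenation, it can be decoded mechanically. The sketch below is illustrative only: the lookup tables are transcribed from this section, and reading the raw string with dmidecode is an assumption about the host environment rather than a documented Nutanix procedure.

# Illustrative decoder for the Nutanix G7 DMI model string format
# NX-<motherboard_id><NIC_id>-<HBA_id>-G7, using the option tables above.
# On a Linux host the raw string could be read with, for example,
# `dmidecode -s system-product-name` (an assumption, not part of this document).
import re

MOTHERBOARD_IDS = {
    "T": "X11 multi-node motherboard",
    "U": "X11 single-node motherboard",
    "W": "X11 single-socket single-node motherboard",
}
NIC_IDS = {
    "00": "uses on-board NIC", "D1": "dual-port 1G NIC", "Q1": "quad-port 1G NIC",
    "DT": "dual-port 10GBaseT NIC", "QT": "quad-port 10GBaseT NIC",
    "DS": "dual-port SFP+ NIC", "QS": "quad-port SFP+ NIC",
}

def decode_dmi_string(dmi: str) -> dict:
    match = re.fullmatch(r"NX-([TUW])(\w{2})-(\d)N(\w+)-G7", dmi)
    if not match:
        raise ValueError(f"unrecognized DMI string: {dmi}")
    board, nic, nodes, hba = match.groups()
    return {
        "motherboard": MOTHERBOARD_IDS[board],
        "nic": NIC_IDS.get(nic, nic),
        "nodes": int(nodes),
        "hba": {"L3": "LSI3008"}.get(hba, hba),
    }

print(decode_dmi_string("NX-UDT-1NL3-G7"))
# {'motherboard': 'X11 single-node motherboard', 'nic': 'dual-port 10GBaseT NIC',
#  'nodes': 1, 'hba': 'LSI3008'}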

Block Connection in a Customer Environment


After physically installing the Nutanix block in the datacenter, you can connect the network
ports to the customer's network.

• A switch that can auto-negotiate to 1Gbps is required for the IPMI ports on all G7 platforms.
• A 10 GbE switch that accepts SFP+ copper cables is required for most blocks.
• Nutanix recommends 10GbE connectivity for all nodes.
• The 10GbE NIC ports used on Nutanix nodes are passive. The maximum supported Twinax
cable length is 5 meters, per SFP+ specifications. For longer runs fiber cabling is required.



• Nutanix offers an SFP-10G-SR adapter to convert from SFP+ Twinax to optical. This allows a
switch with a 10GbE optical port to maintain the 10GbE link speed on an optical cable.
• Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies
for production use cases. While initial, low-load implementations might run smoothly
with such technologies, poor performance, VM lockups, and other issues might occur as
implementations scale upward (see Knowledge Base article KB1612). Nutanix recommends
the use of 10Gbps, line-rate, non-blocking switches with larger buffers for production
workloads.

Tip: If you are configuring a cluster with multiple blocks, perform the following procedure on all
blocks before moving on to cluster configuration.

Connecting the Nutanix Block

Before you begin


This procedure requires the following components:

• One Nutanix block (installed in a rack but not yet connected to a power source)
• Customer networking hardware, including 10 GbE ports (SFP+ copper) and 1 GbE ports
• One 10 GbE SFP+ cable for each node (provided).
• One to three RJ45 cables for each node (customer provided)
• Two power cables (provided)

CAUTION: Note the orientation of the ports on the Nutanix block when you are cabling the ports.

Procedure

1. Connect the 10/100 or 10/100/1000 IPMI port of each node to the customer switch with RJ45 cables.
The switch that the IPMI ports connect to must be capable of auto-negotiation to 1Gbps.

2. Connect one or more 10 GbE ports of each node to the customer switch with the SFP+
cables. If you are using 10GBase-T, optimal resiliency and performance require CAT 6 cables.

3. (Optional) Connect one or more 1 GbE or 10 GBaseT ports of each node to the customer
switch with RJ45 cables. (For optimal 10GBaseT resiliency and performance use Cat 6
cables).

4. Connect both power supplies to a grounded, 208V power source (208V to 240V).

Tip: If you are configuring the block in a temporary location before installing it on a rack, the
input can be 120V. After moving the block into the datacenter, make sure that the block is
connected to a 208V to 240V power source, which is required for power supply redundancy.

5. Confirm that the link indicator light next to each IPMI port is illuminated.



6. Turn on all nodes by pressing the power button on the front of each node (the top button).
The top power LED illuminates and the fans are noticeably louder for approximately 2
minutes.

Figure 23: Control panel with power button


3
MEMORY CONFIGURATIONS
Supported Memory Configurations (G7 Platforms)
This topic shows DIMM installation order for all Nutanix G7 platforms. Use the rules and
guidelines in this topic to remove, replace, or add memory.

DIMM Restrictions
Each G7 node must contain only DIMMs of the same type, speed, and capacity.
DIMMs from different manufacturers can be mixed in the same node, but not in the same
channel:

• DIMM slots are arranged on the motherboard in groups called channels. On G7 platforms, all
channels contain two DIMM slots (one blue and one black). Within a channel, all DIMMs must
be from the same manufacturer.
• When replacing a failed DIMM, ensure that you are replacing the old DIMM like-for-like.
• When adding new DIMMs to a node, if the new DIMMs and the original DIMMs are from
different manufacturers, arrange the DIMMs so that the original DIMMs and the new DIMMs
are not mixed in the same channel.

• EXAMPLE: You have an NX-3060-G7 node that has twelve 32GB DIMMs for a total of
384GB. You decide to upgrade to twenty-four 32GB DIMMs for a total of 768GB. When
you remove the node from the chassis and look at the motherboard, you will see that
each CPU has six DIMMs, filling all blue DIMM slots, with all black DIMM slots empty.
Remove all DIMMs from one CPU and place them in the empty DIMM slots for the other
CPU. Then place all the new DIMMs in the DIMM slots for the first CPU, filling all slots. This
way you can ensure that the original DIMMs and the new DIMMs do not share channels.

Note: You do not need to balance numbers of DIMMs from different manufacturers within a
node, so long as they are never mixed in the same channel.
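
As an illustration of the channel rule, the short sketch below checks that a planned DIMM layout never mixes manufacturers within a channel. The slot naming (for example "CPU1-A1", where the letter identifies the channel) follows the labels used in this chapter, but the helper itself is hypothetical, not a Nutanix tool.

# Illustrative check: DIMMs from different manufacturers may share a node but
# must never share a memory channel. The channel is identified by the slot
# letter (A1 and A2 are the same channel).
from collections import defaultdict

def channels_are_clean(layout: dict) -> bool:
    """layout maps a slot label such as 'CPU1-A1' to a DIMM manufacturer."""
    per_channel = defaultdict(set)
    for slot, manufacturer in layout.items():
        cpu, label = slot.split("-")               # for example 'CPU1', 'A1'
        per_channel[(cpu, label[0])].add(manufacturer)
    return all(len(makers) == 1 for makers in per_channel.values())

# Upgrade example from the text: original DIMMs moved to CPU2, new DIMMs on CPU1.
layout = {f"CPU1-{ch}{n}": "new-vendor" for ch in "ABCDEF" for n in "12"}
layout.update({f"CPU2-{ch}{n}": "original-vendor" for ch in "ABCDEF" for n in "12"})
print(channels_are_clean(layout))  # True: no channel mixes manufacturers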

Memory Installation Order for G7 Platforms


A memory channel is a group of DIMM slots.
For G7 platforms, each CPU is associated with six memory channels. Each memory channel
contains two DIMM slots. Memory channels have one blue slot and one black slot each.



Figure 24: DIMM slots for a G7 multi-node motherboard

Figure 25: DIMM slots for a G7 single-node motherboard

Note: DIMM slots on the motherboard are most commonly labeled as A1, A2, and so on. However,
some software tools report DIMM slot labels in a different format, such as 1A, 2A, or CPU1, CPU2,
or DIMM1, DIMM2.



Table 33: DIMM Installation Order for G7 Platforms

Number of DIMMs Slots to use

6 CPU1: A1, B1, C1 (blue slots)


CPU2: A1, B1, C1 (blue slots)

8 CPU1: A1, B1, D1, E1 (blue slots)


CPU2: A1, B1, D1, E1 (blue slots)

12 CPU1: A1, B1, C1, D1, E1, F1 (blue slots)


CPU2: A1, B1, C1, D1, E1, F1 (blue slots)

16 CPU1: A1, B1, D1, E1 (blue slots)


CPU1: A2, B2, D2, E2 (black slots)
CPU2: A1, B1, D1, E1 (blue slots)
CPU2: A2, B2, D2, E2 (black slots)

24 Fill all slots.
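
The installation-order table can also be expressed as a simple lookup. A minimal sketch, with the slot names taken from the table above (the helper and its names are illustrative, not a Nutanix tool):

# DIMM installation order for G7 dual-socket nodes, per the table above.
# Keys are total DIMMs per node; values are the slots to populate on EACH CPU.
G7_DIMM_SLOTS_PER_CPU = {
    6:  ["A1", "B1", "C1"],                               # blue slots
    8:  ["A1", "B1", "D1", "E1"],                         # blue slots
    12: ["A1", "B1", "C1", "D1", "E1", "F1"],             # blue slots
    16: ["A1", "B1", "D1", "E1", "A2", "B2", "D2", "E2"], # blue + black slots
    24: [c + n for c in "ABCDEF" for n in "12"],          # fill all slots
}

def slots_for(dimm_count: int) -> dict:
    per_cpu = G7_DIMM_SLOTS_PER_CPU[dimm_count]  # KeyError = unsupported count
    return {"CPU1": per_cpu, "CPU2": per_cpu}

print(slots_for(16)["CPU1"])
# ['A1', 'B1', 'D1', 'E1', 'A2', 'B2', 'D2', 'E2']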



COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command The commands are executed as the root user in the vSphere or
Acropolis host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin Nutanix/4u

vSphere Web Client ESXi host root nutanix/4u

vSphere client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u

SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client Nutanix Controller VM admin Nutanix/4u


IPMI web interface or ipmitool Nutanix node ADMIN ADMIN

SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

Version
Last modified: December 11, 2019 (2019-12-11T13:50:51-08:00)

