System Specs G7 Single Node
Visio Stencils    iii
  1. System Specifications    4
         Node Naming (NX-3155G-G7)    4
              NX-3155G-G7 System Specifications    7
              NX-3155G-G7 GPU Specifications    11
              Field-Replaceable Unit List (NX-3155G-G7)    12
         Node Naming (NX-8150-G7)    15
              NX-8150-G7 System Specifications    19
              Field-Replaceable Unit List (NX-8150-G7)    23
         Node Naming (NX-8155-G7)    25
              NX-8155-G7 System Specifications    30
              Field-Replaceable Unit List (NX-8155-G7)    35
  2. Component Specifications    38
         Controls and LEDs for Single-node G7 Platforms    38
         LED Meanings for Network Cards    39
         Power Supply Unit (PSU) Redundancy and Node Configuration (G7 Platforms)    40
         Nutanix DMI Information (G7 Platforms)    41
         Block Connection in a Customer Environment    42
              Connecting the Nutanix Block    43
  3. Memory Configurations    45
         Supported Memory Configurations (G7 Platforms)    45
  Copyright    48
         License    48
         Conventions    48
         Default Cluster Credentials    48
         Version    49
VISIO STENCILS
 Visio stencils for Nutanix products are available on VisioCafe.
    Hybrid SSD and HDD                   Two SSDs and four HDDs, with six empty drive slots. The
                                         two SSDs contain the controller VM and metadata, while
                                         the four HDDs are data-only.
    All-flash                            Two, four, or six SSDs, with six empty drive slots.
The NX-3155G-G7 supports one, two, or three NICs. The supported NIC options follow.
The NX-3155G-G7 supports the following GPU cards. You cannot mix GPU types in the same
chassis.
    CAUTION: Do not use the NVIDIA Tesla M10 GPU card on hypervisors with more than 1 TB of
    total memory, due to NVIDIA M-series architectural limitations. The T4 and V100 GPU cards are
    not subject to this limitation.
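As a worked illustration, the caution above reduces to checking the GPU model against the node's total memory: the M10 is limited to 1 TB, while the T4 and V100 are not. The short Python sketch below encodes that rule; the function name, dictionary, and example values are assumptions made only for illustration and are not part of any Nutanix tooling.

# Illustrative check of the GPU caution above: the NVIDIA Tesla M10 must not be
# used on a hypervisor with more than 1 TB of total memory; the T4 and V100
# have no such limit. Names and structure are assumptions for this sketch.

ONE_TB_GB = 1024  # 1 TB expressed in GB

GPU_MEMORY_LIMIT_GB = {
    "NVIDIA Tesla M10": ONE_TB_GB,   # M-series architectural limitation
    "NVIDIA Tesla V100": None,       # no limit
    "NVIDIA Tesla T4": None,         # no limit
}

def gpu_allowed(gpu_model: str, total_memory_gb: int) -> bool:
    """Return True if the GPU model may be used with the given total node memory."""
    limit = GPU_MEMORY_LIMIT_GB[gpu_model]
    return limit is None or total_memory_gb <= limit

print(gpu_allowed("NVIDIA Tesla M10", 24 * 64))  # False: 1.5 TB exceeds the M10 limit
print(gpu_allowed("NVIDIA Tesla T4", 24 * 64))   # True
print(gpu_allowed("NVIDIA Tesla M10", 16 * 32))  # True: 512 GB is within the limit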
Memory
                 • DDR4-2933, 1.2V, 32 GB, RDIMM
                           Note: Each node must contain only DIMMs of the same type, speed, and
                           capacity.
                       6 x 32 GB = 192 GB
                       8 x 32 GB = 256 GB
                       12 x 32 GB = 384 GB
                       16 x 32 GB = 512 GB
                       24 x 32 GB = 768 GB
                 • DDR4-2933, 1.2V, 64 GB, RDIMM
                           Note: Each node must contain only DIMMs of the same type, speed, and
                           capacity.
                       12 x 64 GB = 768 GB
                       16 x 64 GB = 1 TB
                       24 x 64 GB = 1.5 TB
Storage
                 • 2 x SSD (1.92 TB, 3.84 TB, or 7.68 TB) with 4 x HDD (6 TB, 8 TB, or 12 TB)
                 • 2 x SSD (1.92 TB or 3.84 TB) with 4 x HDD (6 TB or 8 TB)
                 • SSD only: 1.92 TB, 3.84 TB, or 7.68 TB
                 • SSD only: 1.92 TB or 3.84 TB
Operating temperature        10°C to 30°C
Non-operating temperature    -40°C to 70°C
Certifications
• Energy Star
Table 7: Minimum Firmware and Software Versions When Using a GPU Card

Firmware or Software      NVIDIA Tesla M10          NVIDIA Tesla V100         NVIDIA Tesla T4

Hypervisor                • AHV                     • AHV                     • AHV
                          • ESXi 6.0                • ESXi 6.0                • ESXi 6.0
                          • ESXi 6.5                • ESXi 6.5                • ESXi 6.5

                          CAUTION: Do not use an
                          M10 GPU card on
                          hypervisors with more
                          than 1 TB of total
                          memory.
HDD, SATA, 6 TB
HDD, SATA, 8 TB
HDD, SATA, 12 TB
Rail, 2U
You can install one to three NICs. All installed NICs must be identical. Always populate the NIC
slots in order: NIC1, NIC2, NIC3.
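The population rule above (one to three NICs, all identical, filling NIC1, NIC2, and NIC3 in order) can be expressed as a small validation routine. The Python sketch below is illustrative only; the slot mapping and function name are assumptions, not part of any Nutanix tool.

# Illustrative check of the NIC rule above: one to three NICs, all identical,
# populated in slot order NIC1 -> NIC2 -> NIC3 with no gaps.
# The data structure is an assumption made for this sketch.

from typing import Dict, Optional

def nic_population_valid(slots: Dict[str, Optional[str]]) -> bool:
    """slots maps 'NIC1'..'NIC3' to an installed NIC model, or None if empty."""
    installed = [slots.get(f"NIC{i}") for i in (1, 2, 3)]
    models = [m for m in installed if m is not None]

    if not 1 <= len(models) <= 3:
        return False                 # at least one and at most three NICs
    if len(set(models)) != 1:
        return False                 # all installed NICs must be identical
    # No empty slot may precede a populated one.
    first_empty = next((i for i, m in enumerate(installed) if m is None), 3)
    return all(m is None for m in installed[first_empty:])

print(nic_population_valid({"NIC1": "ConnectX-4 Lx", "NIC2": "ConnectX-4 Lx", "NIC3": None}))   # True
print(nic_population_valid({"NIC1": None, "NIC2": "ConnectX-4 Lx", "NIC3": None}))              # False: gap before NIC2
print(nic_population_valid({"NIC1": "ConnectX-4 Lx", "NIC2": "ConnectX-3 Pro", "NIC3": None}))  # False: mixed models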
   CPU
                      • 2 x Intel Xeon Platinum 8280M, 28-core Cascade Lake @ 2.7 GHz (56
                        cores per node)
                      • 2 x Intel Xeon Platinum 8280, 28-core Cascade Lake @ 2.7 GHz (56 cores
                        per node)
                      • 2 x Intel Xeon Platinum 8268, 24-core Cascade Lake @ 2.9 GHz (48 cores
                        per node)
                      • 2 x Intel Xeon Platinum 8260M, 24-core Cascade Lake @ 2.4 GHz (48
                        cores per node)
                      • 2 x Intel Xeon Gold 6254, 18-core Cascade Lake @ 3.1 GHz (36 cores per
                        node)
                      • 2 x Intel Xeon Gold 6242, 16-core Cascade Lake @ 2.8 GHz (32 cores per
                        node)
                      • 2 x Intel Xeon Gold 6244, 8-core Cascade Lake @ 3.6 GHz (16 cores per
                        node)
   Memory
                 • DDR4-2933, 1.2V, 128 GB
                         Note: 128GB DIMMs are supported only with M-type CPUs, such as
                         the 8280M. 128GB DIMMs are not supported with hybrid SSD and HDD
                         configurations. Each node must contain only DIMMs of the same type,
                         speed, and capacity.
                      12 x 128 GB = 1.5 TB
                      16 x 128 GB = 2 TB
                      24 x 128 GB = 3 TB
                • DDR4-2933, 1.2V, 64 GB, RDIMM
                        Note: Each node must contain only DIMMs of the same type, speed, and
                        capacity.
                      6 x 64 GB = 384 GB
                      8 x 64 GB = 512 GB
                      12 x 64 GB = 768 GB
                      16 x 64 GB = 1 TB
                      24 x 64 GB = 1.5 TB
                • DDR4-2933, 1.2V, 32 GB, RDIMM
                        Note: Each node must contain only DIMMs of the same type, speed, and
                        capacity.
                      4 x 32 GB = 128 GB
                      6 x 32 GB = 192 GB
                      8 x 32 GB = 256 GB
                      12 x 32 GB = 384 GB
                      16 x 32 GB = 512 GB
                      24 x 32 GB = 768 GB
Storage
                 • SSD only: 1.92 TB, 3.84 TB, or 7.68 TB
                 • SSD only: 1.92 TB or 3.84 TB
                 • SSD (1.92 TB or 3.84 TB) with 4 x NVMe (2 TB or 4 TB)
Operating temperature        10°C to 35°C
Non-operating temperature    -40°C to 70°C
Certifications
• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
NVMe, 2 TB
NVMe, 4 TB
Fan, 80 mm × 80 mm × 38 mm
Rail
Drive Configurations
     Note: If you order a hybrid SSD and HDD configuration, you can later convert the platform to
     an all-SSD configuration. However, if you begin with either a hybrid SSD and HDD or an all-SSD
     configuration, you cannot later convert to an SSD with NVMe configuration.
Certain capacities of HDDs can only mix with certain capacities of SSDs.

HDD capacity       SSD capacity
6 TB or 8 TB       1.92 TB or 3.84 TB
12 TB              3.84 TB
Supported hybrid SSD and HDD configurations
2 SSDs in slots 1 and 2; 4 HDDs in slots 3 through 6; all other slots empty
2 SSDs in slots 1 and 2; 6 HDDs in slots 3 through 8; all other slots empty
2 SSDs in slots 1 and 2; 8 HDDs in slots 3 through 10; all other slots empty
Supported all-SSD configurations
     Populate the drive slots two by two, in numerical order. For a four-drive partial
     configuration, put the drives in slots 1 through 4; for a six-drive partial configuration, put
     the drives in slots 1 through 6; and so on.
     Figure 17: All-SSD configuration
Supported SSD with NVMe configurations
4 SSDs in slots 1 through 4; 4 NVMe drives in slots 9 through 12; all other slots empty
6 SSDs in slots 1 through 6; 4 NVMe drives in slots 9 through 12; all other slots empty
Put four NVMe drives in slots 9 through 12. No other NVMe configurations are supported.
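Taken together, the drive rules above (the HDD/SSD capacity pairing and the hybrid, all-SSD, and SSD-with-NVMe slot layouts) reduce to a few checks. The Python sketch below is an illustrative rendering of those rules as stated in this section; the data structures and function names are assumptions, not Nutanix software.

# Illustrative rendering of the drive rules above for a 12-slot node.
# Data structures and names are assumptions made for this sketch.

from typing import List

# 6 TB and 8 TB HDDs pair with 1.92 TB or 3.84 TB SSDs; 12 TB HDDs only with 3.84 TB SSDs.
HDD_SSD_PAIRING = {6: {1.92, 3.84}, 8: {1.92, 3.84}, 12: {3.84}}

def hybrid_layout_valid(num_hdds: int) -> bool:
    # Hybrid: 2 SSDs in slots 1 and 2, then 4, 6, or 8 HDDs in the slots that follow.
    return num_hdds in (4, 6, 8)

def hybrid_capacities_valid(hdd_tb: int, ssd_tb: float) -> bool:
    return ssd_tb in HDD_SSD_PAIRING.get(hdd_tb, set())

def all_ssd_layout_valid(populated_slots: List[int]) -> bool:
    # All-SSD: populate two by two, in numerical order (slots 1..N with N even).
    n = len(populated_slots)
    return n >= 2 and n % 2 == 0 and sorted(populated_slots) == list(range(1, n + 1))

def ssd_nvme_layout_valid(ssd_slots: List[int], nvme_slots: List[int]) -> bool:
    # SSD + NVMe: 4 SSDs in slots 1-4 or 6 SSDs in slots 1-6, plus exactly 4 NVMe drives in slots 9-12.
    ssd_ok = sorted(ssd_slots) in ([1, 2, 3, 4], [1, 2, 3, 4, 5, 6])
    nvme_ok = sorted(nvme_slots) == [9, 10, 11, 12]
    return ssd_ok and nvme_ok

print(hybrid_layout_valid(6))                                 # True
print(hybrid_capacities_valid(12, 1.92))                      # False: 12 TB HDDs need 3.84 TB SSDs
print(all_ssd_layout_valid([1, 2, 3, 4]))                     # True
print(ssd_nvme_layout_valid([1, 2, 3, 4], [9, 10, 11, 12]))   # True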
You can install one, two, or three NICs. All installed NICs must be identical. Always populate the
NIC slots in order: NIC1, NIC2, NIC3.
Memory
         • DDR4-2933, 1.2V, 64 GB, RDIMM
                Note: Each node must contain only DIMMs of the same type, speed, and
                capacity.
              4 x 64 GB = 256 GB
              6 x 64 GB = 384 GB
              8 x 64 GB = 512 GB
              12 x 64 GB = 768 GB
              16 x 64 GB = 1 TB
              24 x 64 GB = 1.5 TB
         • DDR4-2933, 1.2V, 32 GB, RDIMM
                Note: Each node must contain only DIMMs of the same type, speed, and
                capacity.
              4 x 32 GB = 128 GB
              6 x 32 GB = 192 GB
              8 x 32 GB = 256 GB
              12 x 32 GB = 384 GB
              16 x 32 GB = 512 GB
              24 x 32 GB = 768 GB
Storage
                  • SSD (1.92 TB, 3.84 TB, or 7.68 TB) with 4, 6, 8, or 10 x HDD (6 TB, 8 TB, or 12 TB)
                  • SSD (1.92 TB or 3.84 TB) with 4, 6, 8, or 10 x HDD (6 TB or 8 TB)
                  • SSD only: 1.92 TB, 3.84 TB, or 7.68 TB
                  • SSD only: 1.92 TB or 3.84 TB
                  • SSD (1.92 TB, 3.84 TB, or 7.68 TB) with 4 x NVMe (2 TB or 4 TB)
Expansion slots    2 x PCIe 3.0 (x8, low-profile) per node (both slots filled with NICs)
Operating temperature        10°C to 35°C
Non-operating temperature    -40°C to 70°C
Certifications
• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
NVMe, 2 TB
NVMe, 4 TB
Fan, 80 mm × 80 mm × 38 mm
Rails
Control or LED                     Function

Unit Identifier (UID) button       Press the button to illuminate the i LED (blue).
Top LED: Activity                  Blue or green, blinking = I/O activity; off = idle
Bottom LED: Status                 Solid red = failed drive; on for five seconds after boot = power on

For LED states for add-on NICs, see LED Meanings for Network Cards on page 39.

Speed LED:
• Green: 10 Gb/s
• Yellow: 1 Gb/s
• Off: 10 Mb/s or no connection
Dual-port 10G SFP+ ConnectX-3 Pro
• Green: 10 Gb speed with no traffic
• Blinking yellow and green: activity

Dual-port 40G SFP+ ConnectX-3 Pro
• Solid green: good link
• Blinking yellow: activity
• Not illuminated: no activity

Dual-port 10G SFP28 ConnectX-4 Lx
• Solid yellow: good link
• Blinking yellow: physical problem with link
• Solid green: valid link with no traffic
• Blinking green: valid link with active traffic

Dual-port 25G SFP28 ConnectX-4 Lx
• Solid yellow: good link
• Blinking yellow: physical problem with link
• Solid green: valid link with no traffic
• Blinking green: valid link with active traffic
Nodes per block    Redundant with 120 V power    Redundant with 208-240 V power
3 to 4             No                            Yes
3 to 4             No                            Yes
2                  No                            Yes
1 (NX-8150-G7)     No                            Yes
Table 29:

Argument    Option

Table 30:

Argument    Option
D1          dual-port 1G NIC
Q1          quad-port 1G NIC

HBA_id specifies the number of nodes and the type of HBA controller. For example:

Table 31:

Argument    Option
  • A switch that can auto-negotiate to 1 Gbps is required for the IPMI ports on all G7 platforms.
  • A 10 GbE switch that accepts SFP+ copper cables is required for most blocks.
  • Nutanix recommends 10 GbE connectivity for all nodes.
  • The 10 GbE NIC ports used on Nutanix nodes are passive. The maximum supported Twinax
    cable length is 5 meters, per SFP+ specifications. For longer runs, fiber cabling is required.
        Tip: If you are configuring a cluster with multiple blocks, perform the following procedure on all
        blocks before moving on to cluster configuration.
   • One Nutanix block (installed in a rack but not yet connected to a power source)
   • Customer networking hardware, including 10 GbE ports (SFP+ copper) and 1 GbE ports
   • One 10 GbE SFP+ cable for each node (provided).
   • One to three RJ45 cables for each node (customer provided)
   • Two power cables (provided)
CAUTION: Note the orientation of the ports on the Nutanix block when you are cabling the ports.
Procedure
   1. Connect the 10/100 or 10/100/1000 IPMI port of each node to the customer switch with
      RJ45 cables.
      The switch that the IPMI ports connect to must be capable of auto-negotiating to 1 Gbps.
   2. Connect one or more 10 GbE ports of each node to the customer switch with the SFP+
      cables. If you are using 10GBase-T, optimal resiliency and performance require CAT 6 cables.
   3. (Optional) Connect one or more 1 GbE or 10 GBaseT ports of each node to the customer
      switch with RJ45 cables. (For optimal 10GBaseT resiliency and performance use Cat 6
      cables).
4. Connect both power supplies to a grounded 208 V to 240 V power source.
           Tip: If you are configuring the block in a temporary location before installing it on a rack, the
           input can be 120V. After moving the block into the datacenter, make sure that the block is
           connected to a 208V to 240V power source, which is required for power supply redundancy.
5. Confirm that the link indicator light next to each IPMI port is illuminated.
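After cabling and powering on the block, a quick complement to step 5 is to confirm that each IPMI interface answers on the network. The Python sketch below simply pings a list of IPMI addresses; the addresses shown are placeholders, and the sketch assumes the IPMI interfaces have already been assigned IP addresses, which is outside the scope of this procedure.

# Minimal reachability check for the IPMI ports after cabling. The IP addresses
# below are placeholders; substitute the addresses assigned to your nodes.
# This only confirms network reachability, not IPMI functionality.

import platform
import subprocess

IPMI_ADDRESSES = ["192.0.2.11"]  # placeholder (TEST-NET-1); replace with real IPMI IPs

def pingable(host: str) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for address in IPMI_ADDRESSES:
    status = "reachable" if pingable(address) else "NOT reachable"
    print(f"IPMI {address}: {status}")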
    DIMM Restrictions
    Each G7 node must contain only DIMMs of the same type, speed, and capacity.
    DIMMs from different manufacturers can be mixed in the same node, but not in the same
    channel:
    • DIMM slots are arranged on the motherboard in groups called channels. On G7 platforms, all
      channels contain two DIMM slots (one blue and one black). Within a channel, all DIMMs must
      be from the same manufacturer.
    • When replacing a failed DIMM, ensure that you are replacing the old DIMM like-for-like.
    • When adding new DIMMs to a node, if the new DIMMs and the original DIMMs are from
      different manufacturers, arrange the DIMMs so that the original DIMMs and the new DIMMs
      are not mixed in the same channel.
      • EXAMPLE: You have an NX-3060-G7 node that has twelve 32GB DIMMs for a total of
        384GB. You decide to upgrade to twenty-four 32GB DIMMs for a total of 768GB. When
        you remove the node from the chassis and look at the motherboard, you will see that
        each CPU has six DIMMs, filling all blue DIMM slots, with all black DIMM slots empty.
        Remove all DIMMs from one CPU and place them in the empty DIMM slots for the other
        CPU. Then place all the new DIMMs in the DIMM slots for the first CPU, filling all slots. This
        way you can ensure that the original DIMMs and the new DIMMs do not share channels.
         Note: You do not need to balance numbers of DIMMs from different manufacturers within a
         node, so long as they are never mixed in the same channel.
Note: DIMM slots on the motherboard are most commonly labeled as A1, A2, and so on. However,
some software tools report DIMM slot labels in a different format, such as 1A, 2A, or CPU1, CPU2,
or DIMM1, DIMM2.
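The restrictions above reduce to two checks: every DIMM in the node has the same type, speed, and capacity, and no channel (a pair of slots, one blue and one black) contains DIMMs from two manufacturers. The Python sketch below encodes those two checks; the record layout, channel naming, and manufacturer names are assumptions made only for illustration.

# Illustrative check of the G7 DIMM rules above:
#   1. every DIMM in a node has the same type, speed, and capacity
#   2. within a channel (two slots, one blue and one black), all DIMMs
#      come from the same manufacturer
# The record layout and channel naming are assumptions for this sketch.

from collections import defaultdict
from typing import List, NamedTuple

class Dimm(NamedTuple):
    channel: str        # e.g. "A" groups slots A1 and A2
    manufacturer: str
    dimm_type: str      # e.g. "RDIMM"
    speed: str          # e.g. "DDR4-2933"
    capacity_gb: int

def dimm_config_valid(dimms: List[Dimm]) -> bool:
    # Rule 1: identical type, speed, and capacity across the node.
    if len({(d.dimm_type, d.speed, d.capacity_gb) for d in dimms}) > 1:
        return False
    # Rule 2: a single manufacturer per channel (mixing across channels is fine).
    per_channel = defaultdict(set)
    for d in dimms:
        per_channel[d.channel].add(d.manufacturer)
    return all(len(makers) == 1 for makers in per_channel.values())

original = [Dimm("A", "Samsung", "RDIMM", "DDR4-2933", 32),
            Dimm("B", "Samsung", "RDIMM", "DDR4-2933", 32)]
mixed_channel = original + [Dimm("A", "Micron", "RDIMM", "DDR4-2933", 32)]
separate_channels = original + [Dimm("C", "Micron", "RDIMM", "DDR4-2933", 32)]

print(dimm_config_valid(original))           # True
print(dimm_config_valid(mixed_channel))      # False: two manufacturers share channel A
print(dimm_config_valid(separate_channels))  # True: different manufacturers, different channels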
License
   The provision of this software to you does not grant any licenses or other rights under any
   Microsoft patents with respect to anything other than the file server implementation portion of
   the binaries for this software, including no licenses or any other rights in any hardware or any
   devices or software that are used to communicate with or in connection with this software.
Conventions
   Convention                     Description
   root@host# command             The commands are executed as the root user in the vSphere or
                                  Acropolis host shell.
   > command                      The commands are executed in the Hyper-V host shell.
Default Cluster Credentials

   Interface                         Target                      Username        Password
Version
  Last modified: December 11, 2019 (2019-12-11T13:50:51-08:00)