NX-3060-G9 System
Specifications
NX-3060-G9
December 4, 2024
Contents

1. System Specifications
   Node Naming (NX-3060-G9)
   NX-3060-G9 System Specifications
2. Component Specifications
   Controls and LEDs for Multinode Platforms
   Network Card LED Description
   Power Supply Unit Redundancy and Node Configuration
   Nutanix DMI Information
3. Memory Configurations
   Supported Memory Configurations
4. Nutanix Hardware Naming Convention
Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
1. SYSTEM SPECIFICATIONS
Node Naming (NX-3060-G9)
Nutanix assigns a name to each node in a block; the naming scheme varies by product type.
NX-3060-G9 platforms have one, two, three, or four nodes per block.
• Node A
• Node B
• Node C
• Node D
The NX-3060-G9 supports solid-state drives (SSDs) and non-volatile memory express (NVMe) drives.
The first two SSDs in each node contain the Controller VM (CVM) and metadata. For a configuration where
NVMe drives are present, you must install the NVMe drives in slots 5 and 6 (the two rightmost drive slots in
each node).
Nutanix supports self-encrypting drives (SEDs) for SSDs, but not for NVMe drives.
Note: Mixed SSD + NVMe configuration is offered only as a factory installation. You cannot convert an all-
SSD node to a mixed SSD + NVMe node in the field.
Figure 1: NX-3060-G9 Front Panel
Table 1: Supported Drive Configurations

All-SSD:
• Two SSDs and four empty slots per node
• Four SSDs and two empty slots per node
• Six SSDs per node

Note: When partially populating a node with SSDs, load the drive slots in order from left to right.

Mixed SSD + NVMe:
• Two NVMe drives and four SSDs per node

Note: You must install the NVMe drives in slots 5 and 6 (the two rightmost drive slots in each node).
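These placement rules are easy to check mechanically. The following is a minimal sketch of such a check, assuming slots are numbered 1 through 6 from left to right; the function name and structure are illustrative, not Nutanix software.

# Illustrative validator for the drive-slot rules above (slots numbered
# 1-6, left to right). Hypothetical helper, not a Nutanix tool.
def validate_node_drives(ssd_slots: set, nvme_slots: set) -> None:
    if nvme_slots and nvme_slots != {5, 6}:
        raise ValueError("NVMe drives must occupy slots 5 and 6")
    if nvme_slots and ssd_slots != {1, 2, 3, 4}:
        raise ValueError("Mixed SSD + NVMe requires four SSDs in slots 1-4")
    if not nvme_slots:
        if len(ssd_slots) not in (2, 4, 6):
            raise ValueError("All-SSD nodes take 2, 4, or 6 SSDs")
        if ssd_slots != set(range(1, len(ssd_slots) + 1)):
            raise ValueError("Load SSD slots in order from left to right")

validate_node_drives({1, 2, 3, 4}, {5, 6})  # valid mixed configuration
validate_node_drives({1, 2}, set())         # valid all-SSD configuration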
Figure 2: NX-3060-G9 Back Panel
Figure 3: NX-3060-G9 Exploded View
NX-3060-G9 System Specifications
Table 2: System Characteristics

Boot Device
• 2 x 512 GB M.2 boot device

Chassis
• 2U4N SFF chassis
CPU

Dual Intel Xeon® 5th Gen (Emerald Rapids):
• 2 x Intel Xeon® Silver 4509Y [8 cores / 2.60 GHz / 125 W] (excluded from ENERGY STAR certification)
• 2 x Intel Xeon® Silver 4510 [12 cores / 2.40 GHz / 150 W]
• 2 x Intel Xeon® Silver 4514Y [16 cores / 2.00 GHz / 150 W]

Dual Intel Xeon® 4th Gen (Sapphire Rapids):
• 2 x Intel Xeon® Silver 4410T [10 cores / 2.70 GHz / 150 W] (excluded from ENERGY STAR certification)
• 2 x Intel Xeon® Silver 4410Y [12 cores / 2.00 GHz / 150 W]
• 2 x Intel Xeon® Silver 4416+ [20 cores / 2.00 GHz / 165 W]
• 2 x Intel Xeon® Gold 5415+ [8 cores / 2.90 GHz / 150 W] (excluded from ENERGY STAR certification)
• 2 x Intel Xeon® Gold 5416S [16 cores / 2.00 GHz / 150 W]
Memory

Note:
• The 96 GB memory configuration is available only with Emerald Rapids processors.
• The 16 x 128 GB memory configuration is not supported.
• The minimum memory configuration required for ENERGY STAR 4.0 certification is 512 GB. If memory is less than 512 GB, the platform is not ENERGY STAR certified, even if the CPU is certified.

64 GB DIMMs:
• 4 x 64 GB = 256 GB
• 8 x 64 GB = 512 GB
• 12 x 64 GB = 768 GB
• 16 x 64 GB = 1024 GB

96 GB DIMMs:
• 16 x 96 GB = 1536 GB

128 GB DIMMs:
• 4 x 128 GB = 512 GB
• 8 x 128 GB = 1024 GB
• 12 x 128 GB = 1536 GB
Network

Ports on serverboard:
• 1 x 1GbE dedicated IPMI

AIOM:
• 1 x 2P 10GBase-T (port 1 is shared IPMI)
• 1 x 2P 10GBase-T + 2P SFP+ (port 1 is shared IPMI)

Add-on NICs in PCIe slots:
• 0, 1, or 2 x 10GbE 4P NIC
• 0, 1, or 2 x 10GBase-T 2P NIC
• 0, 1, or 2 x 10GBase-T 4P NIC
• 0, 1, or 2 x 25GbE 2P NIC
• 0, 1, or 2 x 25GbE 4P NIC
Network Cables
• 1M-SFP28
• 3M-SFP28
• 3M-SFP+
• 5M-SFP28
• 5M-SFP+

Power Cable
• 2 x C20/21 6 ft power cable

Power Supply
• 2 x 3000 W Titanium power supply

Server
• 1, 2, 3, or 4 x NX-3060-G9 server
Storage

All-SSD: 2, 4, or 6 x SSD
• 1.92 TB
• 3.84 TB
• 7.68 TB

NVMe + SSD

Note: The SSD and NVMe drives must be equal in capacity.

2 x NVMe:
• 1.92 TB
• 3.84 TB
• 7.68 TB

4 x SSD:
• 1.92 TB
• 3.84 TB
• 7.68 TB

All-SSD SED: 2, 4, or 6 x SSD
• 3.84 TB
• 7.68 TB
TPM
• 0 or 1 x unprovisioned Trusted Platform Module

Transceiver
• SR SFP+ transceiver

VGA
• 1 x VGA connector per node (15-pin female)

Chassis fans
• 4 x 80 mm heavy-duty fans with PWM fan speed control
Table 3: Block, Power, and Electrical

Block

Note: Maximum measurements are shown. Width is from ear to ear; depth is from ears to pull rings.

• Rack units: 2U
• Width: 451 mm
• Height: 88 mm
• Weight: 39.4 kg
• Depth: 778 mm

Package (with rack rails and accessories)
• Weight: 51.9 kg
Shock
• Non-operating: 20 G, square wave, 10 ms, one shock on each side
• Operating: 5 G, half-sine, 10 ms, one shock on each side
Thermal Dissipation (Calculated)

Block with 2 x 16/12/10-core CPU 150 W, 16 x 64 GB x 4 nodes:
• Typical: 7293 BTU/hr
• Maximum: 9724 BTU/hr

Block with 2 x 16/12/10-core CPU 150 W, 8 x 64 GB x 4 nodes:
• Typical: 6858 BTU/hr
• Maximum: 9144 BTU/hr

Block with 2 x 20-core CPU 165 W, 16 x 64 GB x 4 nodes:
• Typical: 7677 BTU/hr
• Maximum: 10236 BTU/hr

Block with 2 x 20-core CPU 165 W, 8 x 64 GB x 4 nodes:
• Typical: 7575 BTU/hr
• Maximum: 10100 BTU/hr

Block with 2 x 8-core CPU 150 W, 16 x 64 GB x 4 nodes:
• Typical: 6980 BTU/hr
• Maximum: 9307 BTU/hr
Vibration (Random)
• Non-operating: 0.98 Grms, 5 to 200 Hz, approx. 30 min/axis
• Operating: 0.21 Grms, 5 to 500 Hz, approx. 15 min/axis

Vibration (Sinusoidal)
• Non-operating: 0.5 G, 5 to 200 Hz, approx. 15 min/axis
• Operating: 0.25 G, 5 to 200 Hz, approx. 15 min/axis
Power Consumption (Calculated)

Note: The power consumption calculations assume maximum NIC and disk configurations.

Block with 2 x 20-core CPU 165 W, 16 x 64 GB x 4 nodes:
• Maximum: 3000 VA
• Typical: 2250 VA

Block with 2 x 20-core CPU 165 W, 8 x 64 GB x 4 nodes:
• Maximum: 2960 VA
• Typical: 2220 VA

Block with 2 x 16/12/10-core CPU 150 W, 16 x 64 GB x 4 nodes:
• Maximum: 2850 VA
• Typical: 2136 VA

Block with 2 x 8-core CPU 150 W, 16 x 64 GB x 4 nodes:
• Maximum: 2728 VA
• Typical: 2046 VA

Block with 2 x 16/12/10-core CPU 150 W, 8 x 64 GB x 4 nodes:
• Maximum: 2680 VA
• Typical: 2010 VA
Operating Environment
• Operating temperature: 10°C to 25°C
• Non-operating temperature: -40°C to 70°C
• Operating relative humidity: 8% to 90%
• Non-operating relative humidity: 5% to 95%
Certifications
• BIS
• BSMI
• CE
• CSA
• CSAus
• EAC
• Energy Star
• FCC
• ICES
• KCC
• RCM
• REACH
• RoHS
• S-MARK
• SABS
• SII
• UKCA
• UL
• VCCI-A
• WEEE
• cUL
2. COMPONENT SPECIFICATIONS
Controls and LEDs for Multinode Platforms
Figure 4: Front of Chassis LEDs
Note: The network activity LED indicator is only applicable to the AIOM (onboard NIC). It blinks only when
one of the AIOM ports is connected to the network.
Table 4: LEDs on the Front of the Chassis

Power button (green): power on/off

Network activity LED (flashing green): network activity

Alert indicator:
• Solid red: overheating condition
• Blinking red (1 Hz): fan failure
• Blinking red (0.25 to 0.5 Hz): power failure
• Solid blue: UID activated locally
• Blinking blue: UID activated using IPMI
UID button (blue): pressing the unit identification (UID) button toggles the blue UID function of the information LED and a blue LED on the rear of the chassis, to help locate the node in large racks.
Table 5: Drive LEDs

Activity LED: blue or green; blinking = I/O activity, off = idle
Status LED: solid red = failed drive; on for 5 seconds after boot = power on
Back Panel LEDs
The back panel LED locations and their behavior are identical for all three multinode platforms (NX-1065-G9, NX-3035-G9, and NX-3060-G9). The following image shows the LED locations for the NX-3060-G9 platform, but the same layout applies to the other multinode platforms.
Figure 5: Back Panel LEDs (NX-3060-G9 Shown)
IPMI, Link LED (left):
• Solid green: 100 Mbps
• Solid amber: 1 Gbps

IPMI, Activity LED (right):
• Blinking amber: active

AIOM, Link LED (left):
• Off: no link
• Green: linked at 10 Gb/s
• Amber: linked at 1 Gb/s

AIOM, Activity LED (right):
• Off: no activity
• Blinking green: link up (traffic flowing)

Locator LED (UID):
• Blinking blue: node identified
Table 6: Power Supply LED Indicators

• Off: no AC power to any of the power supplies
• Steady amber: a critical event caused a shutdown (failure, overcurrent protection, overvoltage protection, fan failure, overtemperature protection, or undervoltage protection)
• Blinking amber (1 Hz): a warning event (high temperature, overvoltage, undervoltage, or other conditions); the power supply continues to operate
• Blinking green (1 Hz): AC present, 12VSB on (PSU off) or PSU in sleep state
• Solid green: output on and OK
• Solid amber: AC cord unplugged
• Blinking green (2 Hz): power supply firmware updating
For LED states for add-on NICs, see Network Card LED Description.
Network Card LED Description
Different NIC manufacturers use different LED colors and blink states. Not all NICs are supported for every
Nutanix platform. See the system specifications for your platform to verify which NICs are supported.
Table 7: Supermicro NICs

Dual/Quad Port 25GbE:
• Link (LNK) LED: green = 25 GbE; amber = less than 25 GbE; off = no link
• Activity (ACT) LED: blinking = activity; off = no activity

Dual Port 100GbE:
• Link (LNK) LED: green = 100 GbE; amber = less than 100 GbE
• Activity (ACT) LED: blinking = activity; off = no activity
Table 8: Intel NICs

Quad port 10GBase-T:
• Link (LNK) LED: green = 10 Gbps; yellow = 5/2.5/1 Gbps; off = 100 Mbps or no link
• Activity (ACT) LED: blinking green = transmitting or receiving data

Quad port 10G SFP+:
• Link (LNK) LED: green = 10 Gbps; yellow = 1 Gbps
• Activity (ACT) LED: blinking = activity; off = no activity

Dual port 25G:
• Link (LNK) LED: green = 25 Gbps; amber = 10 Gbps
• Activity (ACT) LED: green = SFP LAN port active
Table 9: Broadcom NICs

Dual Port 25G:
• Link (LNK) LED: green = linked at 25 Gb/s; yellow = linked at 10 Gb/s or 1 Gb/s; off = no link
• Activity (ACT) LED: blinking green = traffic flowing; off = no activity

Dual Port 10G:
• Link (LNK) LED: green = linked at 10 Gb/s; amber = linked at 1 Gb/s; off = no link
• Activity (ACT) LED: blinking green = link up (traffic flowing); off = no activity
Table 10: Mellanox NICs (dual port CX-6 25G and dual port CX-6 100G)

Bi-color LED (yellow/green) | Single-color LED (green) | Indicates
1 Hz blinking yellow | Off | Beacon command for locating the adapter card
4 Hz blinking yellow | On | An error with the link
Blinking green | Blinking | Physical activity
Solid green | On | Link up
Power Supply Unit Redundancy and Node Configuration
Note: Carefully plan your AC power source needs, especially when the cluster consists of mixed models. Nutanix recommends a 208-240 V AC power source to ensure power supply unit (PSU) redundancy.
Table 11: PSU Redundancy and Node Configuration

Nutanix model | Number of nodes | Redundancy at 110 V | Redundancy at 208-240 V
NX-1065-G9 | 1, 2, or 3 | No | Yes
NX-3035-G9 | 1 or 2 | No | Yes
NX-3060-G9 | 1, 2, 3, or 4 | No | Yes
NX-1150S-G9 | 1 | Yes | Yes
NX-1175S-G9 | 1 | Yes | Yes
NX-3155-G9 | 1 | No | Yes
NX-8150-G9 | 1 | No | Yes
NX-8155-G9 | 1 | No | Yes
NX-8155A-G9 | 1 | No | Yes
NX-8170-G9 | 1 | No | Yes
NX-9151-G9 | 1 | Not supported | Yes (2+1 PSU redundancy; two PSUs must remain functional at all times)
Caution:
For all G9 platforms except NX-1175S-G9 and NX-1150S-G9: When the input power source is 110
V, a single PSU failure will cause all nodes in a block to power off.
For NX-1175S-G9 and NX-1150S-G9: The block can tolerate a single PSU failure when
connected to a 110 V input power source.
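For capacity planning scripts, the redundancy rules in Table 11 and the caution above reduce to a small lookup. The sketch below is hypothetical (the function and data structures are not from any Nutanix tool); the model data is transcribed from Table 11.

# Hypothetical power-planning helper; data transcribed from Table 11.
# G9 models whose block tolerates a single PSU failure on a 110 V source.
REDUNDANT_AT_110V = {"NX-1150S-G9", "NX-1175S-G9"}
# 110 V input is not supported at all for this model.
NO_110V_SUPPORT = {"NX-9151-G9"}

def psu_redundant(model: str, input_voltage: int) -> bool:
    """Return True if a single PSU failure keeps the block running."""
    if input_voltage >= 208:
        return True  # all G9 models in Table 11 are redundant at 208-240 V
    if model in NO_110V_SUPPORT:
        raise ValueError(f"{model} does not support a 110 V power source")
    return model in REDUNDANT_AT_110V

print(psu_redundant("NX-3060-G9", 110))  # False: a PSU failure powers off all nodes
print(psu_redundant("NX-3060-G9", 208))  # True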
Nutanix DMI Information
vSphere reads model information from the desktop management interface (DMI) table.
For NX-G9 Series platforms, Nutanix provides model information to the DMI table in the following format:
NX-<motherboard_id><NIC_id>-<HBA_id>-G9
motherboard_id has the following options:
Argument Option
T X13 multi-node motherboard
U X13 single-node motherboard
W X13 single-socket single-node motherboard
NIC_id has the following options:
Argument Option
D1 dual-port 1G NIC
Q1 quad-port 1G NIC
DT dual-port 10GBaseT NIC
QT quad-port 10GBaseT NIC
DS dual-port SFP+ NIC
QS quad-port SFP+ NIC
HBA_id specifies the number of nodes and the type of HBA controller. For example:
Argument Option
1NL3 single-node LSI3808
2NL3 2-node LSI3808
4NL3 4-node LSI3808
Table 12: Examples

DMI string | Explanation | Nutanix model
NX-TDT-4NL3-G9 | X13 motherboard with dual-port 10GBase-T NIC, 4 nodes with LSI3808 HBA controllers | NX-3060-G9
NX-TDT-2NL3-G9 | X13 motherboard with dual-port 10GBase-T NIC, 2 nodes with LSI3808 HBA controllers | NX-3035-G9
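To illustrate the format, the sketch below splits a G9 DMI string into its fields using the option tables above. It is a hypothetical example, not Nutanix software; the HBA_id pattern (node count, then N, then an HBA code such as L3 for LSI3808) is inferred from the examples listed here, and real strings may use additional codes.

import re

# Option tables transcribed from the sections above.
MOTHERBOARD_IDS = {
    "T": "X13 multi-node motherboard",
    "U": "X13 single-node motherboard",
    "W": "X13 single-socket single-node motherboard",
}
NIC_IDS = {
    "D1": "dual-port 1G NIC", "Q1": "quad-port 1G NIC",
    "DT": "dual-port 10GBaseT NIC", "QT": "quad-port 10GBaseT NIC",
    "DS": "dual-port SFP+ NIC", "QS": "quad-port SFP+ NIC",
}
HBA_CODES = {"L3": "LSI3808"}

def parse_dmi_string(dmi: str) -> dict:
    """Split an NX-G9 DMI string into motherboard, NIC, and HBA fields."""
    m = re.fullmatch(r"NX-([TUW])(D1|Q1|DT|QT|DS|QS)-(\d)N(L3)-G9", dmi)
    if not m:
        raise ValueError(f"Unrecognized DMI string: {dmi}")
    board, nic, nodes, hba = m.groups()
    return {"motherboard": MOTHERBOARD_IDS[board], "nic": NIC_IDS[nic],
            "nodes": int(nodes), "hba": HBA_CODES[hba]}

print(parse_dmi_string("NX-TDT-4NL3-G9"))
# {'motherboard': 'X13 multi-node motherboard',
#  'nic': 'dual-port 10GBaseT NIC', 'nodes': 4, 'hba': 'LSI3808'}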
3. MEMORY CONFIGURATIONS
Supported Memory Configurations
DIMM installation information for all Nutanix G9 platforms.
DIMM Restrictions
DIMM capacity
Each G9 node must contain only DIMMs of the same capacity. For example, you cannot mix 32 GB
DIMMs and 64 GB DIMMs in the same node.
DIMM speed
G9 platforms that use Intel Sapphire Rapids processors support 4800 MT/s DIMMs. The speed of
the CPU / DIMM interface is 4000 MT/s or 4800 MT/s based on the CPU SKU used.
G9 platforms that use Intel Emerald Rapids processors ship with 5600 MT/s DIMMs. The speed of
the CPU / DIMM interface depends on the CPU class and on whether you have installed one DIMM
per memory channel (1DPC) or two DIMMs per memory channel (2DPC).
• Platinum-8xxx: max 5600 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-6xxx: max 5200 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-5xxx: max 4800 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Silver-4xxx: max 4400 MT/s at both 1DPC and 2DPC.
• Bronze-3xxx (single socket boards only): max 4400 MT/s at both 1DPC and 2DPC.
If you install a 5600 MT/s DIMM in a G9 platform that uses Intel Sapphire Rapids processors, the DIMM runs at a maximum of 4800 MT/s.
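These limits reduce to a lookup keyed by CPU class and DIMMs per channel (DPC). The sketch below illustrates the Emerald Rapids rules listed above; the table and function names are illustrative, not from Nutanix.

# Illustrative only: maximum memory interface speed (MT/s) for G9 platforms
# with Emerald Rapids CPUs, keyed by CPU class and DIMMs per channel (DPC).
EMERALD_RAPIDS_MAX_MTS = {
    "Platinum-8xxx": {1: 5600, 2: 4400},
    "Gold-6xxx":     {1: 5200, 2: 4400},
    "Gold-5xxx":     {1: 4800, 2: 4400},
    "Silver-4xxx":   {1: 4400, 2: 4400},
    "Bronze-3xxx":   {1: 4400, 2: 4400},  # single-socket boards only
}

def effective_memory_speed(cpu_class: str, dpc: int, dimm_rating: int = 5600) -> int:
    """The interface runs at the DIMM rating, capped by the CPU/DPC limit."""
    return min(dimm_rating, EMERALD_RAPIDS_MAX_MTS[cpu_class][dpc])

print(effective_memory_speed("Gold-5xxx", 1))      # 4800
print(effective_memory_speed("Platinum-8xxx", 2))  # 4400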
DIMM manufacturer
Each G9 node must contain only DIMMs from the same manufacturer.
Memory Installation Order for G9 Multi-Node Platforms
A memory channel is a group of DIMM slots.
For G9 multi-node platforms, each CPU is associated with eight memory channels that contain one blue
slot each.
Figure 6: DIMM Slots for a G9 Multi-node Server Board
Table 13: DIMM Installation Order for G9 Multi-Node Platforms

Number of DIMMs | Slots to use | NX-1065-G9 capacities | NX-3035-G9 capacities | NX-3060-G9 capacities
4 | CPU1: A1, G1; CPU2: A1, G1 | 32 GB, 64 GB | 32 GB, 64 GB, 128 GB | 64 GB, 128 GB
8 | CPU1: A1, C1, E1, G1; CPU2: A1, C1, E1, G1 | 32 GB, 64 GB | 32 GB, 64 GB, 128 GB | 64 GB, 128 GB
12 | CPU1: A1, C1, D1, E1, F1, G1; CPU2: A1, C1, D1, E1, F1, G1 | 32 GB, 64 GB | 32 GB, 64 GB, 128 GB | 64 GB, 128 GB
16 | Fill all slots. | 32 GB, 64 GB | 32 GB, 64 GB, 128 GB | 64 GB
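The population order in Table 13 can also be expressed per CPU as a slot list. The following is an illustrative helper (not a Nutanix tool) that returns the blue slots to fill for a supported DIMM count on a multi-node board, assuming the eight channels A through H with one blue slot each, as described above.

# Illustrative helper: blue-slot population order for G9 multi-node boards,
# per CPU, transcribed from Table 13.
ALL_SLOTS = ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"]
SLOTS_PER_CPU = {
    2: ["A1", "G1"],
    4: ["A1", "C1", "E1", "G1"],
    6: ["A1", "C1", "D1", "E1", "F1", "G1"],
    8: ALL_SLOTS,
}

def slots_for(total_dimms: int) -> dict:
    """Map each CPU to the slots to populate for a supported DIMM count."""
    if total_dimms not in (4, 8, 12, 16):
        raise ValueError("Supported DIMM counts are 4, 8, 12, and 16")
    per_cpu = SLOTS_PER_CPU[total_dimms // 2]
    return {"CPU1": per_cpu, "CPU2": per_cpu}

print(slots_for(12))
# {'CPU1': ['A1', 'C1', 'D1', 'E1', 'F1', 'G1'],
#  'CPU2': ['A1', 'C1', 'D1', 'E1', 'F1', 'G1']}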
Memory Installation Order for G9 Single-Node Platforms
A memory channel is a group of DIMM slots.
For G9 single-node platforms, each CPU is associated with eight memory channels. Each memory channel
contains two DIMM slots, one blue slot and one black slot, for a total of 32 DIMM slots.
Figure 7: DIMM Slots for a G9 Single-Node Server Board
Table 14: DIMM Installation Order for Single-Node Hyper G9 Platforms

Supported capacities for every configuration below (NX-8155-G9 and NX-8170-G9): 32 GB, 64 GB, 128 GB.

Number of DIMMs | Slots to use
4 | CPU1: A1, G1 (blue slots); CPU2: A1, G1 (blue slots)
8 | CPU1: A1, C1, E1, G1 (blue slots); CPU2: A1, C1, E1, G1 (blue slots)
12 | CPU1: A1, C1, D1, E1, F1, G1 (blue slots); CPU2: A1, C1, D1, E1, F1, G1 (blue slots)
16 | CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots)
24 | CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots) and A2, C2, E2, G2 (black slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots) and A2, C2, E2, G2 (black slots)
32 | Fill all slots.
Memory Installation Order for NX-8155A-G9 Platform
A memory channel is a group of DIMM slots.
For the G9 single-node AMD hyper platform, each CPU is associated with twelve memory channels. Each memory channel contains one DIMM slot, for a total of 24 DIMM slots.
Figure 8: DIMM Slots for NX-8155A-G9 Platform
Table 15: DIMM Installation Order for the NX-8155A-G9 Platform

Supported capacities for every configuration below: 32 GB, 64 GB, 128 GB.

Number of DIMMs | Slots to use
4 | CPU1: A1, G1; CPU2: A1, G1
8 | CPU1: A1, C1, G1, I1; CPU2: A1, C1, G1, I1
12 | CPU1: A1, B1, C1, G1, H1, I1; CPU2: A1, B1, C1, G1, H1, I1
16 | CPU1: A1, B1, C1, E1, G1, H1, I1, K1; CPU2: A1, B1, C1, E1, G1, H1, I1, K1
20 | CPU1: A1, B1, C1, D1, E1, G1, H1, I1, J1, K1; CPU2: A1, B1, C1, D1, E1, G1, H1, I1, J1, K1
24 | Fill all slots.
4. NUTANIX HARDWARE NAMING CONVENTION
Every Nutanix block has a unique name based on the standard Nutanix naming convention.
The Nutanix hardware model name uses the format prefix-body-suffix.
For all Nutanix platforms, the prefix is NX to indicate that the platform is sold directly by Nutanix and
support calls are handled by Nutanix.
body uses the format ABCD | Y. The following table describes the body values.
Figure 9: Nutanix Hardware Naming Convention
Table 16: Body
Body Description
A Indicates the product series and is one of the following values.
• 1 – Entry-level & ROBO
• 3 – Balanced compute and storage
• 8 – High performance
• 9 – Accelerated system
B Indicates the number of nodes.
• For single-node platforms, B is always 1.
• For multi-node platforms, B can be 1, 2, 3, or 4.
Note: For multi-node platforms, the documentation always
uses a generic zero for B.
C Indicates the chassis form factor and is one of the following
values.
• 1 – 1U1N (one rack unit high with one node)
• 3 – 2U2N (two rack units high with two nodes)
• 5 – 2U1N (two rack units high with one node)
• 6 – 2U4N (two rack units high with four nodes)
• 7 – 1U1N (one rack unit high with one node)
D Indicates the drive form factor and is one of the following values.
• 0 – 2.5 inch drives
• 1 – E1.S drives
• 3 – E3.S drives
• 5 – 3.5 inch drives
Y Indicates platform types, and takes one of the following values:
• S – Single socket
• G – GPU (Not used in G9 since GPU is available on multiple
models)
Table 17: Suffix
Suffix Description
G5 The platform uses the Intel Broadwell CPU.
G6 The platform uses the Intel Skylake CPU.
G7 The platform uses the Intel Cascade Lake CPU.
G8 or N-G8 The platform uses the Intel Ice Lake CPU.
G9 The platform uses the Intel Sapphire Rapids or Emerald Rapids
CPU.
A-G9 The platform uses the AMD Genoa CPU.
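As a worked example of the convention, the sketch below decodes a G9 model name using the values from Tables 16 and 17. It is hypothetical helper code; the regular expression and names are illustrative, and the Y value G is omitted because it is not used for G9.

import re

# Decoding tables transcribed from Tables 16 and 17 (hypothetical helper,
# not a Nutanix tool).
SERIES = {"1": "Entry-level & ROBO", "3": "Balanced compute and storage",
          "8": "High performance", "9": "Accelerated system"}
CHASSIS = {"1": "1U1N", "3": "2U2N", "5": "2U1N", "6": "2U4N", "7": "1U1N"}
DRIVES = {"0": "2.5 inch", "1": "E1.S", "3": "E3.S", "5": "3.5 inch"}

def decode_model(name: str) -> dict:
    """Decode an NX-G9 model name of the form NX-ABCD[S]-suffix."""
    m = re.fullmatch(r"NX-(\d)(\d)(\d)(\d)(S)?-?(A-G\d|N-G\d|G\d)", name)
    if not m:
        raise ValueError(f"Unrecognized model name: {name}")
    a, b, c, d, y, suffix = m.groups()
    return {
        "series": SERIES[a],
        "nodes": "multi-node (generic 0)" if b == "0" else f"{b} node(s)",
        "chassis": CHASSIS[c],
        "drive_form_factor": DRIVES[d],
        "single_socket": y == "S",
        "generation_suffix": suffix,
    }

print(decode_model("NX-3060-G9"))
# {'series': 'Balanced compute and storage', 'nodes': 'multi-node (generic 0)',
#  'chassis': '2U4N', 'drive_form_factor': '2.5 inch',
#  'single_socket': False, 'generation_suffix': 'G9'}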
COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention Description
variable_value The action depends on a value that is unique to your environment.
ncli> command The commands are executed in the Nutanix nCLI.
user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command The commands are executed as the root user in the vSphere or Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
output The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface Target Username Password
Nutanix web console Nutanix Controller VM admin Nutanix/4u
vSphere Web Client ESXi host root nutanix/4u
vSphere client ESXi host root nutanix/4u
SSH client or console ESXi host root nutanix/4u
SSH client or console AHV host root nutanix/4u
SSH client Nutanix Controller VM nutanix nutanix/4u
SSH client Nutanix Controller VM admin Nutanix/4u
SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin
Version
Last modified: December 4, 2024 (2024-12-04T22:24:21+05:30)