GGC Installation Guide
This document describes the installation and configuration process for a Google Global Cache (GGC)
node. Follow these instructions if you’re an ISP deploying GGC in your network. Installation consists of the following steps:
1. Hardware installation
2. Network installation
3. IP addressing
4. Software installation
5. BGP configuration
Google may provide a networking device that connects to GGC machines. In some cases it’s managed
by the ISP, in others by Google. In this Installation Guide, all references to a Google router mean a
networking device that is both provided and managed by Google.
Steps 2 and 5 have two scenarios, depending on whether the GGC node is connected to a Google router
or an ISP managed switch.
Follow only the steps applicable to the GGC node type you’re installing.
2 Hardware Installation
Follow these instructions to rack mount GGC machines. Some details vary depending on the type of
GGC hardware provided. See the appendices for detailed information.
You may rack mount GGC machines as soon as you receive them.
2.1 You Will Need
Dell and HPE machines: Rack mount installation kit and vendor-specific instructions (shipped
with the machines)
Network and power cabling, as listed in Appendix A - Cabling Requirements - Google Router and
Appendix B - Cabling Requirements - ISP Managed Switch
2.2 Procedure
1. HPE Apollo 4200 and Equus: Remove the shipping screws on either side of the chassis, following
vendor instructions. If you don’t remove the shipping screws, you won’t be able to open the
chassis to maintain the mid-chassis-mounted disks in the future.
GGC equipment is heavy. Use two people to lift it and follow appropriate health
and safety procedures to prevent injury and to avoid damage to the GGC
equipment.
NOTE: For Dell R740xd2 machines, to prevent possible injury, ensure that the thumbscrews
located on the front left and right control panels are fastened during racking, so that the machine
doesn’t slide out of the rack when you pull the front drive bay.
5. Verify that both Power Supply Units (PSUs) show green indicator lights.
For redundancy and performance reasons, you must use both PSUs. Google
strongly recommends that you connect each power supply to independent power
feeds. If a second feed isn’t available, you can connect both PSUs to the same
feed.
3 Network Installation - Google Router
Follow the instructions in this section if Google is providing and managing a GGC router to which the
GGC machines are connected. Otherwise, proceed to Network Installation - ISP Managed Switch.
Connect the GGC machines and the router to your network as soon as you receive them. Do this even if
you aren’t ready to install the GGC software or start GGC traffic.
The GGC router comes with preconfigured IP addressing on its uplink interface(s). LACP might be enabled
on the uplink interface(s), depending on the previously requested configuration, as shown in the GGC Router
Configuration section of the GGC Node Status page in the ISP portal
(https://isp.google.com/assets/config/).
Please verify that the IP addressing is what you intended and that you’re currently able to use it. If
any IP addressing details need to be changed, please contact ggc@google.com
immediately to avoid unnecessary turn-up delays. Always include the GGC node name in
communications with us.
Network diagram of a GGC deployment with GGC router: single interconnect to ISP network (left) and two interconnects (right)
For technical reasons, we can only support a single interconnect on Dell S5248F and S5232F
GGC routers.
GGC router
Uplink connectivity components, including SFPs or QSFPs and single- or multi-mode fibers, as required
for the number of uplinks
3.2 Procedure
Details of cabling and network interfaces vary, depending on the GGC hardware type, the number of
machines in the node, and the type and number of uplinks in use. See Appendix A - Cabling
Requirements - Google Router.
2. Install SFPs and cabling as described in Appendix A - Cabling Requirements - Google Router.
Passive mode
Layer 2 mode
Check uplink physical status (link lights) on the GGC router and on your device.
Verify GGC router light levels (Tx and Rx) at your device.
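As an illustration only (the exact command depends on your device’s vendor and platform), on many Cisco IOS/IOS-XE devices that support digital optical monitoring you can read the Tx and Rx power of the uplink optics with a command such as the following; the interface name is a placeholder:
show interfaces tenGigabitEthernet 1/0/1 transceiver detail
The output lists transmit and receive power in dBm, which you can compare against the SFP’s specified receive range.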
4 Network Installation - ISP Managed Switch
Follow the instructions in this section if you’ll be providing all network connectivity for the GGC
machines.
You may connect the GGC machines to your network as soon as you receive them, even if you aren’t
ready to install the GGC software or start GGC traffic.
Switch or router
Install SFPs and cabling, as described in Appendix B - Cabling Requirements - ISP Managed Switch.
Maximum port speed (10Gbps for 10Gbps links, 1Gbps for 1Gbps links, etc.)
Full duplex
Auto-negotiation enabled
For GGC machines using a single interface, Link Aggregation Control Protocol (LACP) should be
disabled.
For GGC machines using multiple interfaces, LACP should be enabled, and configured as follows:
Passive mode
Layer 2 mode
Standalone mode (aggregated link should remain up, even if a physical port is down)
You may connect different GGC machines in the same node to different switches, but it isn’t required.
If you use multiple switches, the VLAN used by the GGC machines must span all switches involved.
Sample switch configurations are provided in Appendix C - Configuration Examples - ISP Managed
Switch. Refer to your switch vendor’s documentation for specific configuration commands.
5 IP Addressing
GGC nodes require a dedicated layer 3 subnet. You can configure nodes as dual-stacked (preferred),
IPv4 only, or IPv6 only.
For each IP protocol version supported by the GGC node, each machine is assigned a maintenance IP and
one or more virtual IPs (VIPs).
Maintenance IPs are configured statically. VIPs are managed automatically to ensure they move to
other machines during failures and machine maintenance. This minimizes disruption of traffic to the
users and ensures the BGP sessions with GGC nodes don’t remain down for extended periods.
5.1 IP Addressing Requirements for GGC Nodes
GGC nodes require a specific size of allocated public IP subnet, as shown in the table below.
IPv4 addresses are assigned within the GGC allocated subnet as follows:
You must use the 1st usable address in the subnet for the subnet gateway.
If required, assign the 2nd and 3rd addresses to an ISP-managed switch (e.g. for HSRP or GLBP).
The 4th and following IP addresses in the subnet are reserved for GGC machines (management
IPs and VIPs).
Check the Configuration tab of the node’s page in the ISP Portal
(https://isp.google.com/assets/config/) for the BGP peering IP address previously provided to
Google.
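For illustration only, assuming a hypothetical /27 allocation of 192.0.2.0/27 (a documentation prefix), the scheme above would map as follows:
192.0.2.1               subnet gateway (1st usable address)
192.0.2.2 - 192.0.2.3   ISP-managed switch, if required (e.g. HSRP or GLBP)
192.0.2.4 and up        GGC machines (management IPs and VIPs)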
Please verify that the IP addressing is what you intended and that you’re currently able to use it. If
any IP addressing details need to be changed, please contact ggc@google.com
immediately to avoid unnecessary turn-up delays. Always include the GGC node name in
communications with us.
IPv6 addresses are assigned within the GGC allocated subnet as follows:
The ::1 address in the subnet is used as a statically addressed subnet gateway. If your device is
configured to send router advertisements, the GGC machines use those in preference to the
static gateway.
If required, assign the ::2 and ::3 addresses to an ISP-managed switch (e.g. for HSRP or GLBP).
The ::4 and following IP addresses in the subnet are reserved for GGC machines (management
IPs and VIPs).
Check the Configuration tab of the node’s page in the ISP Portal
(https://isp.google.com/assets/config/) for the BGP peering IP address previously provided to
Google.
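For illustration only, assuming a hypothetical allocation of 2001:db8:0:1::/64 (a documentation prefix), the scheme above would map as follows:
2001:db8:0:1::1            subnet gateway (static)
2001:db8:0:1::2 - ::3      ISP-managed switch, if required (e.g. HSRP or GLBP)
2001:db8:0:1::4 and up     GGC machines (management IPs and VIPs)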
Please verify that the IP addressing is what you intended and that you’re currently able to use it. If
any IP addressing details need to be changed, please contact ggc@google.com
immediately to avoid unnecessary turn-up delays. Always include the GGC node name in
communications with us.
GGC nodes that are behind Google routers have additional subnet requirements because of the
interconnects needed to connect them to your network.
The number of interconnects you can configure on a GGC router depends on its model:
We support up to 4 interconnects on each Cisco GGC Google router.
Each interconnect may be configured with IPv4, IPv6, or both address types.
A /31 IPv4 subnet (or larger) is required for interconnects with standard configuration.
A /29 IPv4 subnet (or larger) is required for interconnects where a redundancy protocol (HSRP,
VRRP, etc.) is used on the ISP’s side.
It’s up to the ISP to decide which IP from the allocated interconnect subnet is configured on each side
of the interconnect. Google doesn’t have any guidelines or preferences.
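For example, with a hypothetical standard interconnect using the documentation prefix 198.51.100.0/31, you could assign 198.51.100.0 to your router and 198.51.100.1 to the GGC router, or the reverse. With a hypothetical /29 such as 198.51.100.8/29 and HSRP or VRRP on your side, you could use two addresses for your redundant routers, one for the virtual gateway address, and one for the GGC router; the exact assignment is your choice.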
5.3 Enabling IPv6
You can enable IPv6 prior to installation by specifying the IPv6 subnet and IPv6 Router for BGP
Sessions when you supply the technical information required for node activation in the ISP Portal
(https://isp.google.com/assets/config/) asset pages.
Google strongly recommends you enable IPv6 for new nodes, even if you don’t yet have significant IPv6
user traffic, provided that IPv6 is globally reachable.
Please verify that your IPv6 implementation works properly, as connectivity issues will
significantly delay turn-up.
For an existing node that’s already serving IPv4 traffic, you can enable IPv6 through the ISP Portal
(https://isp.google.com/assets/config/) asset pages by following these steps:
If you’re enabling IPv6 support for a GGC node that’s connected to a Google router, you’ll
also need to provide the IPv6 subnet for the interconnect that will have an IPv6 BGP session
configured over it.
6 Software Installation (Network based)
Network-based install is possible on GGC nodes that meet all of these requirements:
Cisco NCS 5001 or 5011 GGC router
Dell R740xd, Dell R740xd2, Dell R7515, Dell R7615, HPE Apollo 4200, or Equus hardware
6.1 Procedure
2. Wire the machines to the GGC router, following the cabling requirements in Appendix A exactly.
4. Wait for the GGC router to complete turn-up. You will receive an email message once this is
done, instructing you to boot the machines. Do this by pressing the front
power button. If the machines were already powered on, reboot them at this point.
6. No further action needed. The machines should perform an automated network-based install.
6.1.2 Dell R740xd2, Dell R7515, Dell R7615, HPE Apollo 4200 and Equus
2. Wire the machines to the GGC router, following the cabling requirements in Appendix A exactly.
5. No further action needed. All machines will attempt to boot from the network at regular intervals.
Once the GGC router is fully provisioned and has established BGP session(s), all machines
should install over the network without intervention on your side.
6.2 GGC software network-based installation failures
Occasionally GGC network-based installation may fail to start. If this happens, please retry the
procedure above. If the failure persists, try the USB-drive installation process.
If the machine can’t establish network connectivity, check that all cables are properly seated and connected
according to the requirements in Appendix A.
7 Software Installation (USB-drive)
This section describes the steps to install the GGC software on the machine. After this step is
complete, the installer automatically signals Google to begin the turn-up process or to return this
machine to a serving state, in the case of a reinstallation.
5. Enter the network configuration and wait for the installer to complete.
It’s possible to install a GGC machine without Internet connectivity. In this case, the machine
repeatedly tries contacting GGC management systems until it succeeds.
A USB stick with at least 1 GB capacity. One USB stick per machine is provided.
Download, save, and run the tools required to create a bootable USB stick
GGC machines, mounted in a rack and powered up. A connection to the network with Internet
connectivity is preferred but not required.
You’ll also need to advertise the prefix allocated to the GGC node to your upstream networks.
7.2 Installation Procedure
Only the latest version of the setup image is supported. If you use an older installer version we may
ask you to reinstall the machine.
You’ll need a USB removable media device to store the GGC setup image. This can be the USB stick
shipped with GGC servers, or any other USB stick or portable USB drive with at least 1 GB capacity.
Create the USB boot stick on a computer on which you have the permissions described above. You can
create multiple boot sticks, to install machines in parallel, or you can make one USB stick and reuse it
for multiple machine installations.
For details on how to write the install image to the USB stick on various operating systems, see the ISP
Help Center (https://support.google.com/interconnect?p=usb).
The node’s IP addressing details, previously provided to Google, are shown on the Configuration tab of
the GGC Node’s page in the ISP portal (https://isp.google.com/assets/config/).
Please verify that the IP addressing is what you intended and that you’re currently able to use it. If
any IP addressing details need to be changed, please contact ggc@google.com
immediately to avoid unnecessary turn-up delays. Always include the GGC node name in
communications with us.
NOTE: For Dell R740xd2 machines, use the front ports when installing the GGC software.
If you’ve installed this machine before, it might not automatically boot from the USB
stick. If that happens, follow the steps at Booting the machine from the USB stick.
If there are issues with the machine registering USB devices, try ports on the
opposite side of the machine, e.g. use the front USB ports instead of the rear ports, or
vice versa.
1. Press ENTER or wait for 10 seconds for the ‘Boot Menu’ to disappear. The machine boots up and
starts the installation program.
The installer examines the hardware. Some modifications applied by the installer may require the
machine to reboot.
If that happens, make sure the machine boots from the USB stick again. The installation program
then resumes.
2. The installer detects which network interface has a link. In this example, it’s the first 10GE
interface. It prompts you to configure it, as shown in this screen. Press ENTER to proceed:
NIC Detection
3. Respond to the prompts that appear on the screen. The configuration should match the
information provided to Google in the ISP Portal (https://isp.google.com/assets/config/):
NOTE: If you’ve used this USB stick to install any GGC machines before, the fields listed below
are pre-populated with previously used values. Verify they’re correct before pressing ENTER.
Enable LACP: Select ‘Y’ if the machine has multiple interfaces connected. Select ‘N’ if it
has only a single interface connected.
Enter the machine number: 1 for the first machine, 2 for the second, and so on.
IP Information
4. The installer validates IP information and connectivity, then begins software installation onto the
local hard drive. This step takes a couple of minutes. Allow it to finish.
NOTE: The installer may need to reboot the machine in order to reconfigure the disk controllers.
In this case, please don’t unplug the USB stick, and restart the installation process after the
reboot.
5. When the installation process has completed successfully, it prompts you to press ENTER to
reboot the machine:
Successful installation
If any warnings or error messages are shown on the screen, don’t reboot the machine. See GGC
Software Installation Troubleshooting.
6. Remove the USB stick from the machine and press ENTER to reboot.
7. When the machine reboots after a successful installation, it boots from disk. The machine is
now ready for remote management. The monitor shows the Machine Health Screen:
Booted from disk
8. Label each machine with the name and IP address assigned to it.
If the installation didn’t go as described above, follow the steps in the next section, GGC
Software Installation Troubleshooting.
7.3 GGC Software Installation Troubleshooting
Occasionally GGC software installation may fail. This is usually due to either:
Hardware issues, which prevent the machine from booting or prevent the install image from
being written to disk
Network issues, which prevent the machine from connecting to GGC management systems
First, check machine hardware status by attaching a monitor, and viewing the Machine Health Screen
(https://support.google.com/interconnect/answer/9028258?hl=en&ref_topic=7658879). Further
information is available in the ISP Help Center (https://support.google.com/interconnect?p=usb).
If you can’t establish network connectivity, check the cables, the switch and router configuration, and the IP
information you entered during installation. Check IP connectivity to the GGC machines from the GGC
switch, from elsewhere in your network, and using external Looking Glass utilities.
If you’re installing a brand new machine for the first time and the setup process reports an error before
network connectivity is available, include the machine’s service tag when reporting the
problem to ggc@google.com.
If the setup process reports an error after network connectivity is established, it automatically uploads
logs to Google for investigation. If this happens, leave the machine running with the USB stick inserted
so we can gather additional diagnostics, if required.
Don’t place transparent proxies, NAT devices, or filters in the path of communications between the GGC
Node and the internet, subject to the explicit exceptions in the GGC Agreement. Any filtering of traffic
to or from GGC machines is likely to block remote management of machines. This delays turn-up.
In other cases, contact the GGC Operations team: ggc@google.com. Always include the GGC node
name in communications with us.
NOTE: Photos of the fault, including those of the installer and Machine Health Screen, are useful for
troubleshooting.
7.4 Boot the Machine from the USB Stick
1. Insert the USB stick and reboot the machine. During POST (Power On Self Test phase of PC
boot), this menu appears on the screen:
4. From the list of bootable devices, select ‘Hard disk C:’ and then the USB stick, as shown in this
screen:
1. Insert the USB stick and reboot the machine. During POST (Power On Self Test phase of PC
boot), this menu appears on the screen:
HPE POST screen
It may take about two minutes before the machine shows any VGA output.
Choose the option that shows just the name of the USB stick. Don’t choose any option containing
“sSATA” or “UEFI”.
Wait for the machine to finish booting from the USB stick.
8 BGP Configuration - Google Router
In this scenario you establish BGP sessions with the Google-provided router. One session is required per
interconnect and IP protocol version.
8.2 Procedure
You can configure your BGP router at any time. The session doesn’t come up until Google completes
the next steps of the installation. Establishing a BGP session with a Google router and advertising
prefixes doesn’t cause traffic to flow. We’ll contact you to arrange a date and time to start traffic.
We recommend at least one BGP session between your BGP peer(s) and the Google-provided router.
We’ll contact you if we have problems establishing the BGP session, or if we detect problems with the
prefixes advertised.
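As an illustration only (not a Google-provided configuration), a minimal Cisco IOS-style sketch of one IPv4 session toward the Google router might look like the following. The AS numbers, peering IP, and advertised prefix are placeholders; take the real values from the Configuration tab of the node’s page in the ISP Portal.
router bgp 64500
 ! 64500 is a placeholder for your AS number
 neighbor 198.51.100.1 remote-as 65010
 ! 198.51.100.1 and 65010 are placeholders for the Google router peering IP and AS
 neighbor 198.51.100.1 description ggc-mynode-interconnect-1
 address-family ipv4
  neighbor 198.51.100.1 activate
  network 203.0.113.0 mask 255.255.255.0
  ! placeholder for a prefix you advertise over this session
 exit-address-family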
If you’re planning to perform work that may adversely affect GGC nodes, don’t disable
BGP sessions. Instead, schedule maintenance for it in the ISP Portal
(https://isp.google.com/assets/config/).
9 BGP Configuration - ISP Managed Switch
In this scenario you establish a BGP session directly with the GGC node.
BGP configuration details for this node from ISP Portal (https://isp.google.com/assets/config/)
9.2 Procedure
You can configure your BGP router at any time. The session doesn’t come up until Google completes
the next steps of the installation. Establishing a BGP session with a new GGC node and advertising
prefixes doesn’t cause traffic to flow. We’ll contact you to arrange a date and time to start traffic.
It’s important to understand that the BGP session established with a GGC node isn’t used for traditional
routing purposes. This has two implications:
Only a single BGP session (for IPv4 and IPv6 each) with a GGC node is supported and necessary.
We’ll contact you if we have problems establishing the BGP session, or if we detect problems with the
prefixes advertised.
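As an illustration only (not a Google-provided configuration), a minimal Cisco IOS-style sketch of the IPv6 session with the GGC node might look like the following. The AS numbers, the node’s BGP peering IP, and the advertised prefix are placeholders; take the real values from the ISP Portal.
router bgp 64500
 ! 64500 is a placeholder for your AS number
 neighbor 2001:db8:0:1::4 remote-as 65020
 ! placeholders for the GGC node’s IPv6 BGP peering IP and AS
 address-family ipv6
  neighbor 2001:db8:0:1::4 activate
  network 2001:db8:100::/48
  ! placeholder for a prefix you announce to the GGC node
 exit-address-family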
Disabling the BGP session doesn’t stop the node from serving traffic. Our systems
continue to use the most recently received BGP feed when the session isn’t established.
If you’re planning to perform work that may adversely affect GGC nodes, schedule
maintenance for it in the ISP Portal (https://isp.google.com/assets/config/).
10 Appendix A - Cabling Requirements - Google Router
10.1 Dell R730xd (Cisco NCS 5001)
1 x 1G copper SFP
1 x multi-mode fiber
10.1.1 Procedure
If you’re using 10G SFPs, insert SFPs into the GGC router, starting from interface #30.
If you’re using 100G QSFP28s, insert QSFP28s into the GGC router, starting from interface
#1/0.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
3. Insert 10G SFPs into machine network interface “Port 1”, as shown below.
Second machine 10G interface “Port 1” to router interface #1, and so on.
R730 Cabling (Google)
4 x 10G SR SFPs: 2 for the machine, 2 for the GGC router or 2 x 10G SFP+ direct attach cables
10.2.1 Procedure
If you’re using 10G SFPs, insert up to 16 10G SFPs into the GGC router, starting from
interface #24.
If you’re using 100G QSFP28s, insert up to 2 100G QSFP28s into the GGC router, starting
from interface #1/0.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
2. Insert machine-facing 10G SFPs into GGC router, starting from interface #0.
4 x 10G SR SFPs: 2 for the machine, 2 for the switch or 2 x 10G SFP+ direct attach cables
10.3.1 Procedure
Insert up to 2 100G QSFP28s into GGC router, starting from interface #1/0. 10G uplinks
aren’t supported.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
2. Insert machine-facing 10G SFPs into GGC router, starting from interface #0.
4 x 10G SR SFPs: 2 for the machine, 2 for the GGC router or 2 x 10G SFP+ direct attach cables
10.4.1 Procedure
If you’re using 10G SFPs, insert up to 16 10G SFPs into the GGC router, starting from
interface #33.
If you’re using 100G QSFP28s, insert up to 6 100G QSFP28s into the GGC router, starting
from interface #49.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
2. Insert machine-facing 10G SFPs into GGC router, starting from interface #1.
10.5.1 Procedure
Insert up to 10 100G QSFP28s into the GGC router, starting from interface #23.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
First machine 50G interface (#1 in the diagram) to GGC router interface #1.
Second machine 50G interface (#1 in the diagram) to GGC router interface #2, and so on.
10.6.1 Procedure
Insert up to 10 100G QSFP28s into the GGC router, starting from interface #23.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
First machine 50G interface (#1 in the diagram) to GGC router interface #1.
Second machine 50G interface (#1 in the diagram) to GGC router interface #2, and so on.
R740xd2 Cabling
10.7.1 Procedure
Insert up to 10 100G QSFP28s into the GGC router, starting from interface #23.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
First machine 50G interface (#1 in the diagram, Port 1, LHS) to GGC router interface #1.
Second machine 50G interface (#1 in the diagram) to GGC router interface #2, and so on.
10.8.1 Procedure
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
R7515/R7615/R740xd2 Uplinks
First machine 50G interface (#1 in the diagram) to GGC router interface #0
Second machine 50G interface (#1 in the diagram) to GGC router interface #1, and so on.
10.9.1 Procedure
Insert up to 10 100G QSFP28s into the GGC router, starting from interface #23.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
First machine interface (outlined in yellow in the image below) to GGC router interface #1.
Second machine interface (outlined in yellow in the image below) to GGC router interface
#2, and so on.
10.10.1 Procedure
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
R7515/R7615/R740xd2 Uplinks
First machine 50G interface (#1 in the diagram) to GGC router interface #0
Second machine 50G interface (#1 in the diagram) to GGC router interface #1, and so on.
10.11.1 Procedure
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
R7515/R7615/R740xd2 Uplinks
First machine 50G interface (#1 in the diagram) to GGC router interface #0
Second machine 50G interface (#1 in the diagram) to GGC router interface #1, and so on.
R740xd2 Cabling
4 x 10G SR SFPs: 2 for the machine, 2 for the GGC router or 2 x 10G SFP+ direct attach cables
10.12.1 Procedure
If you’re using 10G SFPs, insert SFPs into the GGC router, starting from interface #24.
If you’re using 100G QSFP28s, insert QSFP28s into the GGC router, starting from interface
#1/0.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
3. Insert 10G SFPs into machine network interfaces #1 and #2, as shown.
4. Connect machines to GGC router with fiber or direct attach cables, as shown:
Second machine 10G interfaces #1, #2 to router interfaces #2, #3, and so on.
4 x 10G SR SFPs: 2 for the machine, 2 for the GGC router or 2 x 10G SFP+ direct attach cables
Additional 100G QSFP28s for uplink to your network (10G SFP uplinks aren’t supported)
10.13.1 Procedure
Insert 100G QSFP28s into the GGC router starting from interface #1/0. 10G uplinks aren’t
supported.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
3. Insert 10G SFPs into machine network interfaces #1 and #2, as shown.
4. Connect machines to GGC router with fiber or direct attach cables, as shown:
Second machine 10G interfaces #1, #2 to router interfaces #2, #3, and so on.
10.14.1 Procedure
Insert 100G QSFP28s into the GGC router starting from interface #24. 10G uplinks aren’t
supported.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
First machine 40G interface (#2 in the diagram) to GGC router interface #0
Second machine 40G interface (#2 in the diagram) to GGC router interface #1, and so on.
10.15.1 Procedure
Insert 100G QSFP28s into the GGC router starting from interface #24. 10G uplinks aren’t
supported.
Or use specific GGC router interfaces listed in “Interconnect Port Details” section on the
ISP portal GGC node page.
2. Connect machines to the GGC router with direct attach cable, as shown:
First machine network interface (outlined in yellow in the image below) to GGC router
interface #0
Second machine network interface (outlined in yellow in the image below) to GGC router
interface #1, and so on
11.1.1 Procedure
Connect machine RJ45 network interfaces, indicated by #1 and #2, to your switch.
11.2.1 Procedure
Connect machine RJ45 network interfaces #1 and #2, as shown, to your switch.
R430 Cabling
11.3 Dell R440
11.3.1 Procedure
Connect machine RJ45 network interfaces #1 and #2, as shown, to your switch.
R440 Cabling
2 x 10G SFPs (SR or LR): 1 for the machine, 1 for the switch
11.4.1 Procedure
4 x 10G SFPs (SR or LR); 2 for the machine, 2 for the router
11.5.1 Procedure
1. Insert two SFPs into machine network interfaces #1 and #2, as shown.
5. On your switch, add the two ports facing each machine into a separate LACP bundle.
2 x 10G SFPs (SR or LR): 1 for the machine, 1 for the switch
11.6.1 Procedure
12 Appendix C - Configuration Examples - ISP Managed Switch
These examples are for illustration only; your configuration may vary. Contact your switch vendor for
detailed configuration support for your specific equipment.
12.1 Cisco IOS XR Interface Configuration (LACP Disabled)
Replace interface descriptions mynode-abc101 with the name of the GGC node and the machine
number.
interface TenGigE0/0/0/0
description mynode-abc101
l2transport
!
interface TenGigE0/0/0/1
description mynode-abc102
l2transport
!
12.2 Cisco IOS XR Interface Configuration (LACP Enabled)
Replace interface descriptions mynode-abc101-port1 with the name of the GGC node, the machine
number, and machine interface name.
bundle lacp-fallback timeout 5 under interface Bundle-EtherX is
important - it allows us to connect to the machines’ BMC (iLO, iDRAC, etc.) even when they
don’t have an operating system (and therefore LACP) running. This decreases the number of
requests for assistance.
interface Bundle-Ether1
description mynode-abc101
bundle lacp-fallback timeout 5
l2transport
!
!
interface TenGigE0/0/0/0
description mynode-abc101-port1
bundle id 1 mode active
!
interface TenGigE0/0/0/1
description mynode-abc101-port2
bundle id 1 mode active
!
interface Bundle-Ether2
description mynode-abc102
bundle lacp-fallback timeout 5
l2transport
!
!
interface TenGigE0/0/0/2
description mynode-abc102-port1
bundle id 2 mode active
!
interface TenGigE0/0/0/3
description mynode-abc102-port2
bundle id 2 mode active
!
12.3 Cisco Switch Interface Configuration (LACP Disabled)
Replace interface descriptions mynode-abc101 with the name of the GGC node and the machine
number.
!
interface TenGigabitEthernet1/1
description mynode-abc101
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/2
description mynode-abc102
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/3
description mynode-abc103
switchport mode access
flowcontrol send off
spanning-tree portfast
!
interface TenGigabitEthernet1/4
description mynode-abc104
switchport mode access
flowcontrol send off
spanning-tree portfast
!
end
12.4 Cisco Switch Interface Configuration (LACP Enabled)
Replace interface descriptions mynode-abc101-Gb1 with the name of the GGC node, the machine
number, and machine interface name.
no port-channel standalone-disable under interface Port-channelX is
important - it allows connecting to the machines’ BMC (iLO, iDRAC, etc.) even when they
don’t have an operating system (and therefore LACP) running. This decreases the number of
requests for assistance.
!
interface GigabitEthernet1/1
description mynode-abc101-Gb1
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 1 mode passive
spanning-tree portfast
!
interface GigabitEthernet1/2
description mynode-abc101-Gb2
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 1 mode passive
spanning-tree portfast
!
interface Port-channel1
description mynode-abc101
switchport
switchport mode access
no port-channel standalone-disable
spanning-tree portfast
!
interface GigabitEthernet1/3
description mynode-abc102-Gb1
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 2 mode passive
spanning-tree portfast
!
interface GigabitEthernet1/4
description mynode-abc102-Gb2
switchport mode access
flowcontrol send off
channel-protocol lacp
channel-group 2 mode passive
spanning-tree portfast
!
interface Port-channel2
description mynode-abc102
switchport
switchport mode access
no port-channel standalone-disable
spanning-tree portfast
end
12.5 Juniper Switch Interface Configuration (LACP Disabled)
Replace interface descriptions mynode-abc101 with the name of the GGC node and the machine
number.
Replace interface descriptions mynode-abc101-Xe1 with the name of the GGC node, the machine
number, and interface name on your switch.
Replace interface descriptions mynode-abc101-port1 with the name of the GGC node, the machine
number, and machine interface name.
mode lacp-dynamic under interface Eth-TrunkX is important - it allows
connecting to the machines’ BMC (iLO, iDRAC, etc.) even when they don’t have an operating
system (and therefore LACP) running. This decreases the number of requests for
assistance.
interface Eth-Trunk1
description mynode-abc101
stp disable
mode lacp-dynamic
interface Eth-Trunk2
description mynode-abc102
stp disable
mode lacp-dynamic
policy-statement no-routes {
term default {
then reject;
}
}
12.11 Juniper BGP Configuration (AS-PATH Based Policy)
Physical Dimensions
Hardware         Height  Width           Depth           Weight
Dell R240        1U      19" rack mount  610mm (24")     12.2kg (26.89lb)
Dell R250        1U      19" rack mount  610mm (24")     12.2kg (26.89lb)
Dell R430xd      1U      19" rack mount  610mm (24")     19.9kg (44lb)
Dell R440        1U      19" rack mount  686mm (27")     17.5kg (38.6lb)
Dell R730xd      2U      19" rack mount  686mm (27")     32.5kg (72lb)
Dell R740xd      2U      19" rack mount  711mm (28")     33.1kg (73lb)
Dell R740xd2     2U      19" rack mount  813mm (32")     43.2kg (95.2lb)
Dell R7515       2U      19" rack mount  682mm (27")     27.3kg (60.2lb)
Dell R7615       2U      19" rack mount  759mm (29.8")   34.5kg (76.05lb)
HPE Apollo 4200  2U      19" rack mount  813mm (32")     50kg (110.2lb)
Equus            2U      19" rack mount  876mm (34.5")   46.3kg (102lb)
Cisco NCS 5001   1U      19" rack mount  483mm (19")     9.3kg (20.5lb)
Cisco NCS 5011   1U      19" rack mount  572mm (22.5")   10kg (22.2lb)
Dell S5248F      1U      19" rack mount  460mm (18.1")   9.7kg (21.4lb)
Dell S5232F      1U      19" rack mount  460mm (18.1")   9.8kg (21.6lb)
Dimensions are vendor data, rounded to the nearest half or whole inch. Please note that mounting
rails and cable-management hardware (if used) can add 3" (76mm) to 6" (152mm) to the overall depth
required.
Required operating temperature is 10°C to 35°C (50°F to 95°F), at up to 80% non-condensing humidity.
You might see exhaust temperatures of up to 50°C (122°F); this is normal.
See Appendix F - Vendor Hardware Documentation for further environmental and mechanical
specifications.
14 Appendix E - Power Requirements
Each machine requires dual power feeds. Machines are shipped with either 110V/240V PSUs or 50V
DC PSUs.
Google doesn’t provide all power cord types. If your facility requires other types of power cords and
they’re not listed in the table below, you’ll need to provide them.
Google doesn’t provide DC power cords for machines; have your electrician connect machines to DC
power.
See Appendix F - Vendor Hardware Documentation for further electrical specification information.
15 Appendix F - Vendor Hardware Documentation
If you have further questions related to hardware delivered by Google and can’t find answers in this
document, consult the appropriate resource listed below.
2. Dell estimated maximum potential instantaneous power draw of the product under maximum
usage conditions. This should only be used for sizing the circuit breaker. Peak power values for other
platforms were sourced in-house by running synthetic stress tests.