This page provides instructions for creating internal passthrough Network Load Balancers to load balance traffic for multiple protocols.
To configure a load balancer for multiple protocols, including TCP and UDP, you create a forwarding rule with the protocol set to L3_DEFAULT. This forwarding rule points to a backend service with the protocol set to UNSPECIFIED.
In this example, we use one internal passthrough Network Load Balancer to distribute traffic across backend VMs in the us-west1 region. The load balancer has a forwarding rule with protocol L3_DEFAULT to handle TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE.
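The rest of this page builds that configuration step by step. As a preview, the core pairing looks like the following sketch. The resource names, network, subnet, and health check here are placeholders, not resources created on this page; the complete, working commands appear in the sections that follow.

gcloud compute backend-services create EXAMPLE_BACKEND_SERVICE \
    --load-balancing-scheme=internal \
    --protocol=UNSPECIFIED \
    --region=us-west1 \
    --health-checks=EXAMPLE_HEALTH_CHECK \
    --health-checks-region=us-west1

gcloud compute forwarding-rules create EXAMPLE_FORWARDING_RULE \
    --load-balancing-scheme=internal \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --network=EXAMPLE_NETWORK \
    --subnet=EXAMPLE_SUBNET \
    --backend-service=EXAMPLE_BACKEND_SERVICE \
    --backend-service-region=us-west1 \
    --region=us-west1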
Before you begin
- Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud CLI overview. You can find commands related to load balancing in the API and gcloud CLI references. If you haven't run the gcloud CLI previously, first run the gcloud init command to authenticate.
- Learn about bash.
Permissions
To get the permissions that you need to complete this guide, ask your administrator to grant you the following IAM roles on the project:
- To create load balancer resources: Compute Load Balancer Admin (roles/compute.loadBalancerAdmin)
- To create Compute Engine instances and instance groups: Compute Instance Admin (roles/compute.instanceAdmin.v1)
- To create networking components: Compute Network Admin (roles/compute.networkAdmin)
For more information about granting roles, see Manage access to projects, folders, and organizations.
You might also be able to get the required permissions through custom roles or other predefined roles.
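For example, an administrator can grant one of these roles with the gcloud CLI. This is a minimal sketch; PROJECT_ID and USER_EMAIL are placeholders for your project ID and the user account that needs access:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.loadBalancerAdmin"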
Set up load balancer for L3_DEFAULT traffic
The steps in this section describe the following configurations:
- An example that uses a custom mode VPC network named lb-network. You can use an auto mode network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.
- A single-stack subnet (stack-type set to IPv4), which is required for IPv4 traffic. When you create a single-stack subnet on a custom mode VPC network, you choose an IPv4 subnet range for the subnet. For IPv6 traffic, we require a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, we set the subnet's ipv6-access-type parameter to INTERNAL. This means new VMs on this subnet can be assigned both internal IPv4 addresses and internal IPv6 addresses.
- Firewall rules that allow incoming connections to backend VMs.
- The backend instance group and the load balancer components used for this example are located in this region and subnet:
  - Region: us-west1
  - Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.
- A backend VM in a managed instance group in zone us-west1-a.
- A client VM to test connections to the backends.
- An internal passthrough Network Load Balancer with the following components:
  - A health check for the backend service.
  - A backend service in the us-west1 region with the protocol set to UNSPECIFIED to manage connection distribution to the zonal instance group.
  - A forwarding rule with the protocol set to L3_DEFAULT and the port set to ALL.
Configure a network, region, and subnet
To configure subnets with internal IPv6 ranges, enable a Virtual Private Cloud (VPC) network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range. To create the example network and subnet, follow these steps:
Console
To support both IPv4 and IPv6 traffic, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.

If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:
- For VPC network ULA internal IPv6 range, select Enabled.
- For Allocate internal IPv6 range, select Automatically or Manually.

For Subnet creation mode, select Custom.

In the New subnet section, specify the following configuration parameters for a subnet:
- For Name, enter lb-subnet.
- For Region, select us-west1.
- To create a dual-stack subnet, for IP stack type, select IPv4 and IPv6 (dual-stack).
- For IPv4 range, enter 10.1.2.0/24.
- For IPv6 access type, select Internal.

Click Done.
Click Create.
To support IPv4 traffic, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.

In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, enter the following information:
  - Name: lb-subnet
  - Region: us-west1
  - IP stack type: IPv4 (single-stack)
  - IP address range: 10.1.2.0/24
- Click Done.
Click Create.
gcloud
For both IPv4 and IPv6 traffic, use the following commands:
To create a new custom mode VPC network, run the gcloud compute networks create command. To configure internal IPv6 ranges on any subnets in this network, use the --enable-ula-internal-ipv6 flag. This option assigns a /48 ULA prefix from within the fd20::/20 range used by Google Cloud for internal IPv6 subnet ranges.

gcloud compute networks create lb-network \
    --subnet-mode=custom \
    --enable-ula-internal-ipv6
Within the lb-network network, create a subnet for backends in the us-west1 region. To create the subnet, run the gcloud compute networks subnets create command:

gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1 \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL
For IPv4 traffic only, use the following commands:
To create the custom VPC network, use the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom
To create the subnet for backends in the us-west1 region within the lb-network network, use the gcloud compute networks subnets create command:

gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
API
For both IPv4 and IPv6 traffic, use the following steps:
Create a new custom mode VPC network by making a POST request to the networks.insert method. To configure internal IPv6 ranges on any subnets in this network, set enableUlaInternalIpv6 to true. This option assigns a /48 range from within the fd20::/20 range used by Google for internal IPv6 subnet ranges.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "autoCreateSubnetworks": false,
  "name": "lb-network",
  "mtu": MTU,
  "enableUlaInternalIpv6": true
}
Replace the following:
- PROJECT_ID: the ID of the project where the VPC network is created.
- MTU: the maximum transmission unit of the network. The MTU can be either 1460 (default) or 1500. Review the maximum transmission unit overview before setting the MTU to 1500.
Make a POST request to the subnetworks.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "10.1.2.0/24",
  "network": "lb-network",
  "name": "lb-subnet",
  "stackType": "IPV4_IPV6",
  "ipv6AccessType": "INTERNAL"
}
For IPv4 traffic only, use the following steps:
Make a POST request to the networks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "name": "lb-network",
  "autoCreateSubnetworks": false
}
Make a POST request to the subnetworks.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "privateIpGoogleAccess": false
}
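After the subnet is created, you can optionally verify its configuration. For example, the following command (a quick check, not a required setup step) shows the subnet's stack type and, for dual-stack subnets, the automatically assigned internal IPv6 range:

gcloud compute networks subnets describe lb-subnet \
    --region=us-west1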
Configure firewall rules
This example uses the following firewall rules:
- fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the 10.1.2.0/24 range. This rule allows incoming traffic from any client located in the subnet.
- fw-allow-lb-access-ipv6: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the IPv6 range configured in the subnet. This rule allows incoming IPv6 traffic from any client located in the subnet.
- fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify only the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.
- fw-allow-health-check-ipv6: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
Console
In the Google Cloud console, go to the Firewall policies page.
To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a:
- Click Create firewall rule.
- Name: fw-allow-lb-access
- Network: lb-network
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: All instances in the network
- Source filter: IPv4 ranges
- Source IPv4 ranges: 10.1.2.0/24
- Protocols and ports: select Specified protocols and ports.
  - Select TCP and enter ALL.
  - Select UDP.
  - Select Other and enter ICMP.
Click Create.
To allow incoming SSH connections:
- Click Create firewall rule.
- Name: fw-allow-ssh
- Network: lb-network
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: choose Specified protocols and ports, and then type tcp:22.
Click Create.
To allow IPv6 TCP, UDP, and ICMP traffic to reach backend instance group ig-a:
- Click Create firewall rule.
- Name: fw-allow-lb-access-ipv6
- Network: lb-network
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: All instances in the network
- Source filter: IPv6 ranges
- Source IPv6 ranges: IPV6_ADDRESS assigned in the lb-subnet
- Protocols and ports: select Specified protocols and ports.
  - Select TCP and enter 0-65535.
  - Select UDP.
  - Select Other and, for the ICMPv6 protocol, enter 58.
Click Create.
To allow Google Cloud IPv6 health checks:
- Click Create firewall rule.
- Name: fw-allow-health-check-ipv6
- Network: lb-network
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-health-check-ipv6
- Source filter: IPv6 ranges
- Source IPv6 ranges: 2600:2d00:1:b029::/64
- Protocols and ports: Allow all
Click Create.
To allow Google Cloud IPv4 health checks:
- Click Create firewall rule.
- Name: fw-allow-health-check
- Network: lb-network
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-health-check
- Source filter: IPv4 ranges
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports: Allow all
Click Create.
gcloud
To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a, create the following rule:

gcloud compute firewall-rules create fw-allow-lb-access \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.1.2.0/24 \
    --rules=tcp,udp,icmp
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs by using the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
To allow IPv6 traffic to reach backend instance group ig-a, create the following rule:

gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=IPV6_ADDRESS \
    --rules=all
Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.

Create the fw-allow-health-check firewall rule to allow Google Cloud health checks.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
Create the fw-allow-health-check-ipv6 rule to allow Google Cloud IPv6 health checks.

gcloud compute firewall-rules create fw-allow-health-check-ipv6 \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64 \
    --rules=tcp,udp,icmp
API
To create the fw-allow-lb-access firewall rule, make a POST request to the firewalls.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "10.1.2.0/24" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Create the fw-allow-lb-access-ipv6 firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "IPV6_ADDRESS" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "58" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.

To create the fw-allow-ssh firewall rule, make a POST request to the firewalls.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "0.0.0.0/0" ],
  "targetTags": [ "allow-ssh" ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [ "22" ]
    }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
To create the fw-allow-health-check firewall rule, make a POST request to the firewalls.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "130.211.0.0/22", "35.191.0.0/16" ],
  "targetTags": [ "allow-health-check" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Create the fw-allow-health-check-ipv6 firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "2600:2d00:1:b029::/64" ],
  "targetTags": [ "allow-health-check-ipv6" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Create backend VMs and instance groups
For this load balancing scenario, you create a Compute Engine zonal managed instance group and install an Apache web server.
To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, INTERNAL) from the subnet. For more details about IPv6 requirements, see the Internal passthrough Network Load Balancer overview: Forwarding rules.
If you want to use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
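For example, the following sketch converts an existing VM's default network interface to dual-stack. VM_NAME is a placeholder for your instance name, and the zone assumes this example's us-west1-a:

gcloud compute instances network-interfaces update VM_NAME \
    --zone=us-west1-a \
    --network-interface=nic0 \
    --stack-type=IPV4_IPV6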
Instances that participate as backend VMs for internal passthrough Network Load Balancers must be running the appropriate Linux Guest Environment, Windows Guest Environment, or other processes that provide equivalent functionality.
For instructional simplicity, the backend VMs run Debian GNU/Linux 12.
Create the instance group
Console
To support both IPv4 and IPv6 traffic, use the following steps:
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter vm-a1.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Expand the Advanced options section.
- Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
systemctl restart apache2
Expand the Networking section, and then specify the following:
- For Network tags, add allow-ssh and allow-health-check-ipv6.
- For Network interfaces, click the default interface and configure the following fields:
  - Network: lb-network
  - Subnetwork: lb-subnet
  - IP stack type: IPv4 and IPv6 (dual-stack)
Click Create.
To support IPv4 traffic, use the following steps:
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter vm-a1.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Expand the Advanced options section.
- Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
systemctl restart apache2
Expand the Networking section, and then specify the following:
- For Network tags, add allow-ssh and allow-health-check.
- For Network interfaces, click the default interface and configure the following fields:
  - Network: lb-network
  - Subnetwork: lb-subnet
  - IP stack type: IPv4 (single-stack)
Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter ig-a.
- For Location, select Single zone.
- For Region, select us-west1.
- For Zone, select us-west1-a.
- For Instance template, select vm-a1.
- Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
Click Create.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

To handle both IPv4 and IPv6 traffic, use the following command. The network tags match the firewall rules created earlier.

gcloud compute instance-templates create vm-a1 \
    --region=us-west1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh,allow-health-check-ipv6 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
systemctl restart apache2'
Or, if you want to handle IPv4 traffic only, use the following command:

gcloud compute instance-templates create vm-a1 \
    --region=us-west1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=allow-ssh,allow-health-check \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
systemctl restart apache2'
Create a managed instance group in the zone with the gcloud compute instance-groups managed create command:

gcloud compute instance-groups managed create ig-a \
    --zone us-west1-a \
    --size 2 \
    --template vm-a1
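Optionally, verify that the group reaches its target size and that both instances are running:

gcloud compute instance-groups managed list-instances ig-a \
    --zone us-west1-a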
API
To handle both IPv4 and IPv6 traffic, use the following steps:
Create a VM by making a POST request to the instances.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "vm-a1",
  "tags": {
    "items": [ "allow-health-check-ipv6", "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-a1",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" http://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | tee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
To handle IPv4 traffic, use the following steps.
Create a VM by making a POST request to the instances.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "vm-a1",
  "tags": {
    "items": [ "allow-health-check", "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-a1",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" http://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | tee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
Create an instance group by making a POST request to the instanceGroups.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-a",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}
Add the instance to the instance group by making a POST request to the instanceGroups.addInstances method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1"
    }
  ]
}
Create a client VM
This example creates a client VM in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
For IPv4 and IPv6 traffic:
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set the Name to vm-client-ipv6.

Set the Zone to us-west1-a.

Expand the Advanced options section, and then make the following changes:
- Expand Networking, and then add allow-ssh to Network tags.
- Under Network interfaces, click Edit, make the following changes, and then click Done:
  - Network: lb-network
  - Subnet: lb-subnet
  - IP stack type: IPv4 and IPv6 (dual-stack)
  - Primary internal IP: Ephemeral (automatic)
  - External IP: Ephemeral
Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.
gcloud compute instances create vm-client-ipv6 \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh \
    --subnet=lb-subnet
API

Make a POST request to the instances.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client-ipv6",
  "tags": {
    "items": [ "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
For IPv4 traffic:
Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
For Name, enter vm-client.

For Zone, enter us-west1-a.

Expand the Advanced options section.
Expand Networking, and then configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: lb-subnet
Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.
gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet
API
Make a POST request to the instances.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [ "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
Configure load balancer components
Create a load balancer for multiple protocols.
gcloud
Create an HTTP health check for port 80. This health check is used to verify the health of backends in the ig-a instance group.

gcloud compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80
Create the backend service with the protocol set to UNSPECIFIED:

gcloud compute backend-services create be-ilb-l3-default \
    --load-balancing-scheme=internal \
    --protocol=UNSPECIFIED \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb-l3-default \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
For IPv6 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv6 protocol traffic. L3_DEFAULT forwarding rules must be configured with --ports=ALL.

gcloud compute forwarding-rules create fr-ilb-ipv6 \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --subnet=lb-subnet \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1 \
    --ip-version=IPV6
For IPv4 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv4 protocol traffic. L3_DEFAULT forwarding rules must be configured with --ports=ALL. Use 10.1.2.99 as the internal IP address.

gcloud compute forwarding-rules create fr-ilb-l3-default \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1
API
Create the health check by making a POST request to the regionHealthChecks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
  "name": "hc-http-80",
  "type": "HTTP",
  "httpHealthCheck": {
    "port": 80
  }
}
Create the regional backend service by making a POST request to the regionBackendServices.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb-l3-default",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "UNSPECIFIED",
  "connectionDraining": { "drainingTimeoutSec": 0 }
}
For IPv6 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-ipv6",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "ipVersion": "IPV6",
  "networkTier": "PREMIUM"
}
For IPv4 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-l3-default",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "networkTier": "PREMIUM"
}
Test your load balancer
The following tests show how to validate your load balancer configuration and learn about its expected behavior.
Test connection from client VM
This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer.
gcloud:IPv6
Connect to the client VM instance.
gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
Describe the IPv6 forwarding rule fr-ilb-ipv6. Note the IPV6_ADDRESS in the description.

gcloud compute forwarding-rules describe fr-ilb-ipv6 --region=us-west1
From clients with IPv6 connectivity, run the following command. Replace IPV6_ADDRESS with the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule.

curl -m 10 -s http://[IPV6_ADDRESS]:80
For example, if the assigned IPv6 address is fd20:1db0:b882:802:0:46:0:0, the command looks like the following:

curl -m 10 -s http://[fd20:1db0:b882:802:0:46:0:0]:80
gcloud:IPv4
Connect to the client VM instance.
gcloud compute ssh vm-client --zone=us-west1-a
Describe the IPv4 forwarding rule fr-ilb-l3-default.

gcloud compute forwarding-rules describe fr-ilb-l3-default --region=us-west1
Make a web request to the load balancer by using curl to contact its IP address. Repeat the request so that you can see that responses come from different backend VMs. The name of the VM that generates the response is displayed in the HTML response, based on the contents of /var/www/html/index.html on each backend VM. Expected responses look like Page served from: vm-a1.

curl http://10.1.2.99
The L3_DEFAULT forwarding rule is configured to serve all ports. To send traffic to a specific port, append a colon (:) and the port number after the IP address, like this:

curl http://10.1.2.99:80
Ping the load balancer's IP address
This test demonstrates an expected behavior: you can ping the IP address of the load balancer.
gcloud:IPv6
Connect to the client VM instance.
gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
Attempt to ping the IPv6 address of the load balancer. Replace IPV6_ADDRESS with the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule. Notice that you get a response and that the ping command works in this example.

ping6 IPV6_ADDRESS
For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1, the command is as follows:

ping6 2001:db8:1:1:1:1:1:1
The output is similar to the following:

@vm-client: ping6 IPV6_ADDRESS
PING IPV6_ADDRESS (IPV6_ADDRESS) 56(84) bytes of data.
64 bytes from IPV6_ADDRESS: icmp_seq=1 ttl=64 time=1.58 ms
gcloud:IPv4
Connect to the client VM instance.
gcloud compute ssh vm-client --zone=us-west1-a
Attempt to ping the IPv4 address of the load balancer. Notice that you get a response and that the ping command works in this example.

ping 10.1.2.99
The output is similar to the following:

@vm-client: ping 10.1.2.99
PING 10.1.2.99 (10.1.2.99) 56(84) bytes of data.
64 bytes from 10.1.2.99: icmp_seq=1 ttl=64 time=1.58 ms
64 bytes from 10.1.2.99: icmp_seq=2 ttl=64 time=0.242 ms
64 bytes from 10.1.2.99: icmp_seq=3 ttl=64 time=0.295 ms
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
You can reserve a static internal IP address for your example. This configuration allows multiple internal forwarding rules to use the same IP address with different protocols and different ports. The backends of your example load balancer must still be located in the us-west1 region.
The following diagram shows the architecture for this example.
You can also consider using the following forwarding rule configurations:
- Forwarding rules with multiple ports:
  - Protocol TCP with ports 80,8080
  - Protocol L3_DEFAULT with ports ALL
- Forwarding rules with all ports:
  - Protocol TCP with ports ALL
  - Protocol L3_DEFAULT with ports ALL
Reserve static internal IPv4 address
Reserve a static internal IP address for 10.1.2.99 and set its --purpose flag to SHARED_LOADBALANCER_VIP. The --purpose flag is required so that many forwarding rules can use the same internal IP address.
gcloud
Use the gcloud compute addresses create command:

gcloud compute addresses create internal-lb-ipv4 \
    --region us-west1 \
    --subnet lb-subnet \
    --purpose SHARED_LOADBALANCER_VIP \
    --addresses 10.1.2.99
API
Call the addresses.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/addresses

The body of the request must include the addressType, which should be INTERNAL, the name of the address, and the subnetwork that the IP address belongs to. You must specify the address as 10.1.2.99.
{ "addressType": "INTERNAL", "name": "internal-lb-ipv4", "subnetwork": "regions/us-west1/subnetworks/lb-subnet", "purpose": "SHARED_LOADBALANCER_VIP", "address": "10.1.2.99" }
Configure load balancer components
Configure three load balancers with the following components:
- The first load balancer has a forwarding rule with protocol TCP and port 80. TCP traffic arriving at the internal IP address on port 80 is handled by the TCP forwarding rule.
- The second load balancer has a forwarding rule with protocol UDP and port 53. UDP traffic arriving at the internal IP address on port 53 is handled by the UDP forwarding rule.
- The third load balancer has a forwarding rule with protocol L3_DEFAULT and port ALL. All other traffic that does not match the TCP or UDP forwarding rules is handled by the L3_DEFAULT forwarding rule.
- All three load balancers share the same static internal IP address (internal-lb-ipv4) in their forwarding rules.
Create the first load balancer
Create the first load balancer for TCP traffic on port 80.
gcloud
Create the backend service for TCP traffic:
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4) for the internal IP address.

gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
API
Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "TCP",
  "connectionDraining": { "drainingTimeoutSec": 0 }
}
Create the forwarding rule by making a POST request to the forwardingRules.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "TCP",
  "ports": [ "80" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}
Create the second load balancer
Create the second load balancer for UDP traffic on port 53.
gcloud
Create the backend service with the protocol set to UDP:

gcloud compute backend-services create be-ilb-udp \
    --load-balancing-scheme=internal \
    --protocol=UDP \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb-udp \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4) for the internal IP address.

gcloud compute forwarding-rules create fr-ilb-udp \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=UDP \
    --ports=53 \
    --backend-service=be-ilb-udp \
    --backend-service-region=us-west1
API
Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb-udp",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "UDP",
  "connectionDraining": { "drainingTimeoutSec": 0 }
}
Create the forwarding rule by making a POST request to the forwardingRules.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-udp",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "UDP",
  "ports": [ "53" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-udp",
  "networkTier": "PREMIUM"
}
Create the third load balancer
Create the forwarding rule of the third load balancer to use the static reserved internal IP address.
gcloud
Create the forwarding rule with the protocol set to L3_DEFAULT to handle all other supported IPv4 protocol traffic. Use the static reserved internal IP address (internal-lb-ipv4) as the internal IP address.
gcloud compute forwarding-rules create fr-ilb-l3-default \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1
API
Create the forwarding rule by making a POST request to the forwardingRules.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-l3-default",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "networkTier": "PREMIUM"
}
Test your load balancer
To test your load balancer, follow the steps in the Test your load balancer section earlier on this page.
What's next
- To learn about important concepts, see Internal passthrough Network Load Balancer overview.
- To learn how to configure failover, see Configure failover for internal passthrough Network Load Balancer.
- To learn how to configure logging and monitoring for internal passthrough Network Load Balancers, see Internal passthrough Network Load Balancer logging and monitoring.
- To learn about troubleshooting, see Troubleshoot internal passthrough Network Load Balancers.
- Clean up a load balancing setup.