L2 Networking with Oracle Cloud VMware Solution

The release of Oracle Cloud VMware Solution also introduced a new concept into public cloud networking: VLANs. Yes, VLANs! This is not a new concept in networking, but L2 and VLANs were generally unheard of when talking about hyperscale cloud, until now.

In order to provide a VMware solution natively on the same infrastructure as the core services, it was necessary to add L2 features to the Virtual Cloud Network portfolio. OCI has added VLANs with the functionality required to support VMware.

  • As the name suggests, VLANs are broadcast domains, and there is L2 communication between instances connected to the same VLAN.
  • Each VLAN has a route table associated with it, to provide communication to destinations outside the VCN in which the VLAN resides.
  • Each VLAN has an IP block associated with it, and the first IP address from the CIDR is used as the default gateway for instances in the VLAN. This is analogous to an SVI in the on-prem world.
  • Security is a top priority in OCI, and each VLAN has a Network Security Group attached to it. This provides the ingress/egress traffic control.
  • L2 features such as MAC learning and ARP are available only within the VLAN.
  • VLANs are added as secondary vNICs on the ESXi hosts in the SDDC.
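As a mental model, the properties above can be sketched as a small data structure. This is a hypothetical illustration, not an OCI API; the class, field names, and example VLAN are made up, but the gateway rule (first IP of the CIDR) is as described above.

```python
import ipaddress
from dataclasses import dataclass, field

@dataclass
class Vlan:
    # Hypothetical model of an OCI VLAN, for illustration only
    name: str
    cidr: str
    route_rules: dict = field(default_factory=dict)  # destination CIDR -> route target
    nsg_rules: list = field(default_factory=list)    # ingress/egress security rules

    @property
    def svi_gateway(self) -> str:
        # The first IP address of the CIDR serves as the default
        # gateway (SVI) for instances in the VLAN.
        net = ipaddress.ip_network(self.cidr)
        return str(net.network_address + 1)

uplink = Vlan(name="NSX Edge Uplink 1", cidr="172.16.0.128/25")
print(uplink.svi_gateway)  # -> 172.16.0.129
```

Note how the gateway falls out of the CIDR: for 172.16.0.128/25 the SVI is 172.16.0.129, which matches the uplink VLAN values discussed later in this post.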

The details for the VLANs created as part of the provisioning process are given below:

  • vSphere: Management segment
  • vSAN: vSAN traffic
  • vMotion: vMotion traffic
  • NSX VTEP: Geneve-encapsulated traffic for east-west communication
  • NSX Edge VTEP: Geneve-encapsulated traffic between hosts and NSX Edges
  • NSX Edge Uplink 1: Uplink for north-south traffic
  • NSX Edge Uplink 2: Uplink for north-south traffic (initially unused)

Note: VLANs are currently available only for Oracle Cloud VMware Solution. A VLAN is an AD-local construct and does not span regions. Multicast traffic is treated as a broadcast within a VLAN.


Traffic Flows

Let’s look at the various traffic flows involved in communication between vNICs connected to a VLAN. The figure below highlights the basic flows of intra-VLAN, inter-VLAN and VLAN-to-subnet communication.

Intra-VLAN Communication

This is the simplest traffic flow and is exactly as you would expect for communicating between workloads in the same VLAN.

  1. Host 1 sends an ARP request for Host 2, to establish communication.
  2. Host 2 sends an ARP response to Host 1.
  3. Host 1 sends the packet to Host 2 using the learned MAC address.

This is shown by the green arrow in the above figure.
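The three steps above amount to plain MAC learning; a toy sketch of the exchange from Host 1's point of view (all IPs and MACs are hypothetical):

```python
# Toy intra-VLAN ARP exchange; all addresses are hypothetical.
vlan_hosts = {  # ip -> mac for hosts in the same broadcast domain
    "172.16.1.10": "00:50:56:aa:00:01",  # Host 1
    "172.16.1.20": "00:50:56:aa:00:02",  # Host 2
}

arp_cache = {}  # Host 1's learned ip -> mac entries

def send_packet(dst_ip: str) -> str:
    """Resolve dst_ip via ARP if needed, then 'send' to the learned MAC."""
    if dst_ip not in arp_cache:
        # Step 1: broadcast an ARP request; Step 2: the owner replies
        arp_cache[dst_ip] = vlan_hosts[dst_ip]
    # Step 3: the frame is sent using the learned MAC address
    return arp_cache[dst_ip]

print(send_packet("172.16.1.20"))  # -> 00:50:56:aa:00:02
```

Subsequent packets to the same destination skip the ARP exchange because the entry is already cached.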

Inter-VLAN, Subnet to VLAN and External Communication

Communication between hosts in different VLANs, or between a host in a VLAN and an instance connected to a subnet, is also standard IPv4 communication. In this flow, the packet traverses an L3 boundary and therefore passes through the SVI for the VLAN.

The following explanation assumes that Host 1 is in VLAN A and Host 2 is in VLAN B or in a Subnet.

  1. Host 1 sends an ARP request for its default gateway, which is the IPv4 address of the VLAN SVI.
  2. On receiving an ARP response, Host 1 sends the packet to the SVI.
  3. This is where the flow differs slightly.
    • If Host 2 is in a subnet, the SVI looks up the MAC address of Host 2 in the vNIC mapping and forwards the traffic to Host 2.
    • If Host 2 is in VLAN B and the SVI does not have an entry in its MAC table, it sends an ARP request to resolve the MAC address of Host 2 and forwards the traffic once an ARP response is received.

The same flow will also apply for routing traffic via the OCI Gateways.
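The SVI's forwarding decision in step 3 can be sketched as follows. This is a simplified illustration only; the CIDRs, the vNIC mapping, and the MAC table are hypothetical stand-ins for the VCN's internal state.

```python
import ipaddress

# Hypothetical VCN state, for illustration only
subnet_cidr = ipaddress.ip_network("10.0.2.0/24")
vlan_b_cidr = ipaddress.ip_network("172.16.2.0/24")
vnic_mapping = {"10.0.2.15": "00:50:56:bb:00:01"}  # subnet instances: ip -> vNIC MAC
mac_table = {}  # the SVI's learned MAC entries for VLAN B

def resolve_arp(ip: str) -> str:
    # Stand-in for an ARP exchange on VLAN B
    return "00:50:56:cc:00:" + ip.split(".")[-1].zfill(2)

def svi_forward(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    if dst in subnet_cidr:
        # Destination in a subnet: look up the vNIC mapping directly
        return vnic_mapping[dst_ip]
    if dst in vlan_b_cidr:
        # Destination in VLAN B: ARP first if the MAC table has no entry
        if dst_ip not in mac_table:
            mac_table[dst_ip] = resolve_arp(dst_ip)
        return mac_table[dst_ip]
    raise ValueError("no route")

print(svi_forward("10.0.2.15"))  # -> 00:50:56:bb:00:01
```

The key distinction is that subnet destinations are resolved from the vNIC mapping the VCN already holds, while VLAN destinations may require an ARP exchange first.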

VLAN to External Communication

The communication between hosts in a VLAN and external destinations that require traversing OCI gateways (DRG, NAT Gateway, Internet Gateway etc.) is similar to the inter-VLAN and subnet-to-VLAN flows, with the difference that the external destinations are entered in the VLAN route table.

Let’s analyze a situation where Host 1 in VLAN A needs to communicate with a server in your on-prem data center.

  1. Host 1 sends an ARP request for its default gateway, which is the IPv4 address of the VLAN SVI.
  2. On receiving an ARP response, Host 1 sends the packet to the SVI.
  3. Once the packet arrives at the SVI, it performs a lookup in the route table attached to the VLAN.
  4. The traffic is forwarded to the route target configured in the route rules, which is the DRG in the case of on-prem destinations.
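Steps 3 and 4 are a standard route-table lookup with longest-prefix matching; a minimal sketch with hypothetical rules (the CIDRs and targets are invented for illustration):

```python
import ipaddress

# Hypothetical VLAN route rules: destination CIDR -> route target
route_rules = {
    "10.10.0.0/16": "DRG",              # on-prem networks reached via the DRG
    "0.0.0.0/0": "Internet Gateway",    # everything else
}

def lookup_target(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in route_rules.items()
               if dst in ipaddress.ip_network(cidr)]
    # The longest-prefix match decides the route target
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup_target("10.10.4.25"))  # -> DRG
print(lookup_target("8.8.8.8"))     # -> Internet Gateway
```

An on-prem destination matches the more specific DRG rule even though the default route would also match.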

Communication between NSX Overlay and External Destinations

We looked at generic flows for workloads connected to VLANs. The flow we need to understand in detail is where communication happens between the NSX overlay and destinations outside the SDDC. This is where things become slightly OCI-specific, and we need to understand some additional steps before we move on to routing traffic between your SDDC and the external world.

In order to route traffic from the VCN to the NSX overlay, we need to forward the traffic to the NSX Edge uplink. To achieve this, we add a static route pointing to the HA VIP of the NSX Edge. But before we add the entry to the VLAN route table, we need to ensure the IP address of the uplink is mapped in the VCN.

This is completed for you by the provisioning service, but it is important to understand the concept, as you will need to execute this step yourself when you want to use the second (initially unused) NSX Edge uplink or add public access to a VM in your SDDC.

The VLAN we are discussing here is the uplink VLAN, NSX Edge Uplink 1 in this case. We will use static routes to enable communication between the SDDC and the VCN.

Let’s look at what the base setup looks like once the provisioning service delivers the SDDC.

  • The NSX Edge cluster is configured in Active/Standby mode.
  • The VLAN used for the uplink uses the 172.16.0.128/25 network, and 172.16.0.129 is the gateway (SVI) for this VLAN.
  • A Tier-0 router is created and configured with a single uplink.
  • The NSX Edge HA VIP is configured; the IP address assigned is 172.16.0.131 in our case.
  • The NSX Edge has a default route sending all traffic for external destinations to 172.16.0.129.
  • A Tier-1 router is created and connected to the Tier-0 router.
  • If you provide an IP block for the logical switch, the LS is created and connected to the Tier-1 router. We are using 192.168.192.0/24 in our case.

This is the point where the entry is added to the VLAN to allow external communication with the SDDC using a private IP, which in this case is the HA VIP. We will use the option to add the route target only.

Once we complete the above step, the private IP is registered in the VCN and an OCID is generated for it, which allows us to use it as a target in route-table entries.
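Putting the two sides together, the resulting routing configuration can be summarized as data. The IP addresses are the example values from the base setup described above; the dictionary structure itself is illustrative, not an OCI API.

```python
# SDDC side: the NSX Edge default-routes all external traffic
# to the SVI of the uplink VLAN (NSX Edge Uplink 1).
nsx_edge_routes = {"0.0.0.0/0": "172.16.0.129"}

# VCN side: traffic destined for the NSX overlay segment is routed
# to the NSX Edge HA VIP, which is registered as a private IP (with
# an OCID) in the uplink VLAN and can therefore be a route target.
ha_vip = "172.16.0.131"
vcn_route_rules = {"192.168.192.0/24": ha_vip}  # overlay logical switch CIDR

print(vcn_route_rules["192.168.192.0/24"])  # -> 172.16.0.131
```

The static route on the VCN side and the default route on the NSX side are what stitch the overlay and the VCN together.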

The resulting setup will look like the figure below and this is where we can start establishing communication with the world outside your SDDC.

At this point, your VCN and OCVS environment is ready to start integrating with your desired systems in your on-premises DC, on the internet or with Oracle Cloud services. To assist with the VCN side of the configuration, OCVS comes with a set of Networking Quick Actions. We will be looking at these in my next post.


Additional Resources

Click here for more information on SDDC L2 Network Resources.
