Deploying vRealize Suite in VCF 4.x with VLAN-Backed Networks

When deploying VMware Cloud Foundation (VCF), I can’t recommend enough that you deploy with BGP/AVNs. It will make your life easier later when deploying the vRealize Suite, and it simplifies deployment and administration for Tanzu. What happens, though, if you can’t get your network team to support BGP? This is where VLAN-backed networks come in.

First we should start the download of the bundles for the vRealize Suite. Within SDDC Manager go to Lifecycle Management–>Bundle Management and download all of the vRealize bundles. Now that we have the bundles downloading, let’s move on to the vRealize Suite tab.
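If you want to keep an eye on the downloads without the UI, SDDC Manager’s public API exposes the bundle inventory. A minimal sketch with curl and jq (the FQDN and credentials are placeholders, and the field names are from the 4.x API reference, so double-check against your version):

# Grab an API token from SDDC Manager (replace FQDN and credentials with yours)
TOKEN=$(curl -sk -X POST https://sddc-manager.vcf.sddc.lab/v1/tokens \
  -H 'Content-Type: application/json' \
  -d '{"username":"administrator@vsphere.local","password":"your_sso_password"}' | jq -r .accessToken)

# List the bundles and their download status
curl -sk https://sddc-manager.vcf.sddc.lab/v1/bundles \
  -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {description, downloadStatus}'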

Usually the first indication that BGP/AVNs were not deployed comes from the vRealize deployment screen. Notice that the “Deploy” button is greyed out at the bottom with the message saying that the deployment isn’t available because there is no “X-Region Application Virtual Network”.

No problem, we are going to follow VMware KB 80864 to create two edge nodes and add our VLAN-backed networks to SDDC Manager. When looking at the KB you will notice three attachments. The first thing we want to do is open the validated design PDF.

First we are going to need some networks created and configured. Part of the workflow will deploy a Tier-0 gateway where the external uplinks are added. If you are not planning on using the Tier-0 gateway for other use cases (Tanzu), then the VLAN IDs and IP addresses you enter for the uplink networks do not need to exist in your environment. In my experience it is better to create all of these VLANs and subnets anyway, in case you want to use them later. The uplinks don’t have to be a /24; something smaller like a /27 or /28 works (a /28 still gives you 14 usable addresses, which easily covers one uplink interface per edge node). Every edge node will use two IPs for the overlay network. The edge overlay VLAN/subnet needs to be able to talk to the host overlay VLAN/subnet defined when you deployed VCF.

Next we have the networks that are going to be used by the vRealize Suite: the cross-region network and the region-specific network. The cross-region network is used for vRSLCM, vROPs, vRA, and Workspace ONE. The region-specific network is used for Log Insight, the region-specific Workspace ONE components, and the vROPs remote collectors.

Now that we have all of our network information, we need to copy the JSON example from pages 11/12 into a text editor like Notepad++ or copy from the code below. This is what we are going to use to deploy our edge VMs. Make sure DNS entries are created for both edge nodes and the edge cluster VIP. The management IPs should be on the same network as the SDDC Manager and the vCenter deployed by VCF. Make the necessary changes using the networks discussed previously. In the next step we will get the cluster ID.

{
  "edgeClusterName": "sfo-m01-ec01",
  "edgeClusterType": "NSX-T",
  "edgeRootPassword": "edge_root_password",
  "edgeAdminPassword": "edge_admin_password",
  "edgeAuditPassword": "edge_audit_password",
  "edgeFormFactor": "MEDIUM",
  "tier0ServicesHighAvailability": "ACTIVE_ACTIVE",
  "mtu": 9000,
  "tier0RoutingType": "STATIC",
  "tier0Name": "sfo-m01-ec01-t0-gw01",
  "tier1Name": "sfo-m01-ec01-t1-gw01",
  "edgeClusterProfileType": "CUSTOM",
  "edgeClusterProfileSpec": {
    "bfdAllowedHop": 255,
    "bfdDeclareDeadMultiple": 3,
    "bfdProbeInterval": 1000,
    "edgeClusterProfileName": "sfo-m01-ecp01",
    "standbyRelocationThreshold": 30
  },
  "edgeNodeSpecs": [
    {
      "edgeNodeName": "sfo-m01-en01.sfo.rainpole.io",
      "managementIP": "172.16.11.69/24",
      "managementGateway": "172.16.11.253",
      "edgeTepGateway": "172.16.19.253",
      "edgeTep1IP": "172.16.19.2/24",
      "edgeTep2IP": "172.16.19.3/24",
      "edgeTepVlan": "1619",
      "clusterId": "<!REPLACE WITH sfo-m01-cl01 CLUSTER ID !>",
      "interRackCluster": "false",
      "uplinkNetwork": [
        {
          "uplinkVlan": 1617,
          "uplinkInterfaceIP": "172.16.17.2/24"
        },
        {
          "uplinkVlan": 1618,
          "uplinkInterfaceIP": "172.16.18.2/24"
        }
      ]
    },
    {
      "edgeNodeName": "sfo-m01-en02.sfo.rainpole.io",
      "managementIP": "172.16.11.70/24",
      "managementGateway": "172.16.11.253",
      "edgeTepGateway": "172.16.19.253",
      "edgeTep1IP": "172.16.19.4/24",
      "edgeTep2IP": "172.16.19.5/24",
      "edgeTepVlan": "1619",
      "clusterId": "<!REPLACE WITH sfo-m01-cl01 CLUSTER ID !>",
      "interRackCluster": "false",
      "uplinkNetwork": [
        {
          "uplinkVlan": 1617,
          "uplinkInterfaceIP": "172.16.17.3/24"
        },
        {
          "uplinkVlan": 1618,
          "uplinkInterfaceIP": "172.16.18.3/24"
        }
      ]
    }
  ]
}

Within SDDC Manager, on the navigation menu select “Developer Center” then click “API Explorer“. Expand “APIs for managing Clusters“. Click “GET /v1/clusters“, and click “Execute“. Copy the cluster ID into the JSON spec we were working on where it says to replace it.
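The same lookup works from the command line if you already have a token (see the earlier curl sketch):

# List clusters and pull out their IDs and names
curl -sk https://sddc-manager.vcf.sddc.lab/v1/clusters \
  -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {id, name}'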

Now expand “APIs for managing NSX-T Edge Clusters“. Click “POST /v1/edge-clusters/validations“. Copy the contents of the JSON file we created, paste it into the “Value” text box, and click “Execute“.

After executing, copy the “ID of the validation“.

Now that we have the validation ID we are going to see if the validation was successful. Expand “APIs for managing NSX-T Edge Clusters” and click “GET /v1/edge-clusters/validations/{id}“. We want to verify that the validation shows “SUCCEEDED“.

Great! Now we are ready to deploy. Expand “APIs for managing NSX-T Edge Clusters” and click “POST /v1/edge-clusters“. In the Value text box paste the validated JSON file contents and click “Execute“. We now see the edge nodes deploying and can follow the workflow in the Tasks pane from within SDDC Manager.
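If you would rather drive the whole validate-and-deploy flow with curl instead of API Explorer, the calls mirror the endpoints named above. A sketch, assuming you saved the spec as edge-cluster.json and still have a token from earlier:

# Submit the edge cluster spec for validation and capture the validation ID
VID=$(curl -sk -X POST https://sddc-manager.vcf.sddc.lab/v1/edge-clusters/validations \
  -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
  -d @edge-cluster.json | jq -r .id)

# Poll until the validation reports SUCCEEDED
curl -sk https://sddc-manager.vcf.sddc.lab/v1/edge-clusters/validations/$VID \
  -H "Authorization: Bearer $TOKEN" | jq .resultStatus

# Deploy using the same spec
curl -sk -X POST https://sddc-manager.vcf.sddc.lab/v1/edge-clusters \
  -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' \
  -d @edge-cluster.json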

While the edge nodes deploy we can create a transport zone and apply it to our hosts; we will have to wait for the edge node creation task to finish before we can add the edge nodes to it and create the segments. In a web browser log into the NSX-T Manager for the management domain. Once logged in navigate to “System–>Fabric–>Transport Zones“. Click “+Add Zone” and create a new VLAN transport zone for the vRealize Suite.

Now that we have our transport zone we need to add it to both our hosts and edge nodes. Navigate to “System–>Fabric–>Nodes“. Drop down the “Managed by” menu and select your management vCenter. Click “Host Transport Nodes” if it is not already selected, click each individual host, and from the “Actions” drop-down select “Manage Transport Zones“. In the “Transport Zone” drop-down select the transport zone we created earlier and click “Add“.

Next, make sure that the edge node creation task completed successfully. Once it has, go to “System–>Fabric–>Nodes“. From the “Managed by” drop-down select your management vCenter. Click “Edge Transport Nodes“. Check the box for both edge nodes and then from the “Actions” menu select “Manage Transport Zones“. From the “Transport Zone” drop-down select the new transport zone we created and click “Add“.

We only have one thing left to do within NSX Manager. We need to create the segments that will be used by SDDC Manager for the vRealize Suite. Navigate to “Networking–>Segments“. We are going to create two new segments. Click “Add Segment” and put the appropriate information in for the cross region network and then click “Save“. When prompted to continue configuring the segment click “No“.

Click “Add Segment” and put the appropriate information in for the region specific network and then click “Save“. When prompted to continue configuring the segment click “No“.
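The segments can also be scripted through the NSX-T Policy API; a single PATCH per segment creates it if it doesn’t exist. A sketch using basic auth and the cross-region values from my example config later in this post; the NSX Manager hostname and the transport zone path are placeholders you would look up in your own environment:

# Create a VLAN-backed segment for the cross-region network via the Policy API
curl -k -u admin:'nsx_admin_password' -X PATCH \
  https://nsx-manager.vcf.sddc.lab/policy/api/v1/infra/segments/xreg-seg-1632 \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "xreg-seg-1632",
        "vlan_ids": ["1632"],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<your-tz-uuid>"
      }'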

We are in the homestretch! The final piece of the puzzle is telling SDDC Manager about these new networks. Download the config.ini file and the avn-ingestion-v2.py script (the other two attachments from the KB) to your computer. Make the appropriate changes to config.ini (see the example below).

[REGION_A_AVN_SECTION]
name=REPLACE_WITH_FRIENDLY_NAME_FOR_vRLI_NETWORK
subnet=REPLACE_WITH_SUBNET_FOR_vRLI_NETWORK
subnetMask=REPLACE_WITH_SUBNET_MASK_FOR_vRLI_NETWORK
gateway=REPLACE_WITH_GATEWAY_FOR_vRLI_NETWORK
mtu=REPLACE_WITH_MTU_FOR_vRLI_NETWORK
portGroupName=REPLACE_WITH_VCENTER_PORTGROUP_FOR_vRLI_NETWORK
domainName=REPLACE_WITH_DNS_DOMAIN_FOR_vRLI_NETWORK
vlanId=REPLACE_WITH_VLAN_ID_FOR_vRLI_NETWORK

[REGION_X_AVN_SECTION]
name=REPLACE_WITH_FRIENDLY_NAME_FOR_vRSLCM_vROPs_vRA_NETWORK
subnet=REPLACE_WITH_SUBNET_FOR_vRSLCM_vROPs_vRA_NETWORK
subnetMask=REPLACE_WITH_SUBNET_MASK_FOR_vRSLCM_vROPs_vRA_NETWORK
gateway=REPLACE_WITH_GATEWAY_FOR_vRSLCM_vROPs_vRA_NETWORK
mtu=REPLACE_WITH_MTU_FOR_vRSLCM_vROPs_vRA_NETWORK
portGroupName=REPLACE_WITH_VCENTER_PORTGROUP_FOR_vRSLCM_vROPs_vRA_NETWORK
domainName=REPLACE_WITH_DNS_DOMAIN_FOR_vRSLCM_vROPs_vRA_NETWORK
vlanId=REPLACE_WITH_VLAN_ID_FOR_vRSLCM_vROPs_vRA_NETWORK

Here is my filled-in example:

[REGION_A_AVN_SECTION]
name=areg-seg-1631
subnet=192.168.100.128
subnetMask=255.255.255.192
gateway=192.168.100.129
mtu=9000
portGroupName=areg-seg-1631
domainName=corp.com
vlanId=1631

[REGION_X_AVN_SECTION]
name=xreg-seg-1632
subnet=192.168.100.192
subnetMask=255.255.255.192
gateway=192.168.100.193
mtu=9000
portGroupName=xreg-seg-1632
domainName=corp.com
vlanId=1632

Using a transfer utility (I used WinSCP), transfer both config.ini and the avn-ingestion-v2.py file to the SDDC Manager; I placed mine in /tmp. Next, SSH into your SDDC Manager. Log in as “vcf”, then type “su” and press Enter to elevate to root. Change directory to /tmp and type the following to change ownership and permissions:

chmod 777 config.ini
chmod 777 avn-ingestion-v2.py
chown root:root config.ini
chown root:root avn-ingestion-v2.py

From our PuTTY session we will ingest the config.ini file into SDDC Manager. Use the following to accomplish this:

python avn-ingestion-v2.py --config config.ini

# Other options:
# --dryrun (validates the config.ini but won't commit the changes)
# --erase (cleans up the AVN data in SDDC Manager)

Next, we need to tell SDDC Manager which edge cluster to use.

vi /etc/vmware/vcf/domainmanager/application-prod.properties

Add the following line replacing “sfo-m01-ec01” with your edge cluster name and then save.

override.edge.cluster.name=sfo-m01-ec01

The last thing to do is restart the domainmanager service.

systemctl restart domainmanager
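A quick check that the service came back up cleanly before heading to the vRealize Suite tab:

systemctl status domainmanager --no-pager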

Success!! We are ready to deploy the vRealize Suite!

VCF 4.x Offline Bundle Transfer

I recently had to do offline bundle transfers to bring updates into dark sites that could not pull updates down automatically through SDDC Manager. One thing to note: I am doing these steps on a Windows box, so some of the commands may change slightly on Linux.

First download the “Bundle Transfer Utility & Skip Level Upgrade Tool” from my.vmware.com. This tool can be found under the VMware Cloud Foundation section. Once downloaded, extract the files into a folder on the computer that will be used to download the updates. In my example I will be using c:\offlinebundle. After extracting you should see a bin, conf, and lib folder. You will also need a Windows transfer utility such as WinSCP and an SSH client such as PuTTY.

4.2 ONLY!!!

In 4.2 there is a manifest file that must be downloaded from VMware and then uploaded to the SDDC Manager before moving on to the next steps.

From your Windows machine, open an administrative command prompt and run the following to download the 4.2 manifest file. Note that you will have to change the username and password to your my.vmware.com credentials.

cd c:\offlinebundle\bin
lcm-bundle-transfer-util --download --manifestDownload --outputDirectory c:\offlinebundle --depotUser user@vmware.com --depotUserPassword userpass

This creates a file called “lcmManifestv1.json”.

Next we use WinSCP to transfer the lcmManifestv1.json to the SDDC Manager. I put all of the files in /home/vcf. When logging into the SDDC Manager you will be using the account “vcf” and whatever password you configured for that account during deployment.
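If you prefer the command line over WinSCP, recent Windows builds include an OpenSSH client, so the same transfer works from your admin prompt:

scp c:\offlinebundle\lcmManifestv1.json vcf@sddc-manager.vcf.sddc.lab:/home/vcf/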

Once transferred, right-click on the lcmManifestv1.json file and go to Properties. Change the Permissions section to an octal value of 7777. The other way you could do that is from the SDDC Manager using PuTTY with the following command:

chmod 7777 /home/vcf/lcmManifestv1.json

Once transferred to SDDC Manager, we need to ingest this manifest file into the manager. Using PuTTY, log into the SDDC Manager with the username “vcf”. Once logged in, do the following (the FQDN will need to be updated with yours):

cd /opt/vmware/vcf/lcm/lcm-tools/bin
./lcm-bundle-transfer-util --update --sourceManifestDirectory /home/vcf/ --sddcMgrFqdn sddc-manager.vcf.sddc.lab --sddcMgrUser administrator@vsphere.local

All 4.x Versions

If you are running 4.0 or 4.1, this is where you want to begin your offline bundle journey.

PuTTY into your SDDC Manager VM if you have not already, then run:

cd /opt/vmware/vcf/lcm/lcm-tools/bin
./lcm-bundle-transfer-util --generateMarker

These marker files will be created in /home/vcf. Using WinSCP, move the files from SDDC Manager to your Windows c:\offlinebundle directory.
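A quick listing on the SDDC Manager confirms what you should be copying down (file names as referenced in the download command below):

ls -l /home/vcf/markerFile*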

From your Windows admin prompt we are now going to download the bundles:

cd c:\offlinebundle\bin
lcm-bundle-transfer-util -download -outputDirectory c:\offlinebundle -depotUser user@vmware.com -markerFile c:\offlinebundle\markerFile -markerMd5File c:\offlinebundle\markerFile.md5

Notice all of the bundles available. If you only want a specific product version, add -p (version) to the command above. I just selected all 18 by pressing “y”.
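For example, to limit the download to a single product version (the version string here is purely illustrative):

lcm-bundle-transfer-util -download -outputDirectory c:\offlinebundle -depotUser user@vmware.com -p 4.2.0.0 -markerFile c:\offlinebundle\markerFile -markerMd5File c:\offlinebundle\markerFile.md5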

The bundles will be downloaded first to a temp directory under c:\offlinebundle and then will eventually be in the c:\offlinebundle\bundles folder.

Next, using WinSCP, transfer the entire c:\offlinebundle folder up to SDDC Manager into the /nfs/vmware/vcf/nfs-mount/ directory.

We need to change the permissions on this folder. You can either right-click on /nfs/vmware/vcf/nfs-mount/offlinebundle, go to Properties, and change the octal value to 7777, or from PuTTY:

cd /nfs/vmware/vcf/nfs-mount
chmod -R 7777 offlinebundle/

The final step is to ingest the bundles into SDDC Manager. We do that with the following:

cd /opt/vmware/vcf/lcm/lcm-tools/bin
./lcm-bundle-transfer-util -upload -bundleDirectory /nfs/vmware/vcf/nfs-mount/offlinebundle/
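While the upload runs, you can also follow along from the shell (log path from my 4.x SDDC Manager; it may differ slightly between versions):

tail -f /var/log/vmware/vcf/lcm/lcm.log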

If you now log into the SDDC Manager GUI you will see the bundles start to be ingested. Once complete, you should be able to update your environment as needed.

VCF Lab Constructor (VLC), used differently.

It has been a long time since I posted anything new; I will be looking to get back on top of posting consistently in 2021. I have been using VLC for almost a year now and it has come a long way since its humble beginnings. I have deployed many different VMware Cloud Foundation (VCF) environments using this tool, but I also use it to quickly deploy virtual hosts for other testing. For my example today I will be deploying 3 hosts, adding them to a cluster, and then turning on HCI Mesh (Datastore Sharing) to use the storage of my physical vSAN cluster.

First, you will need to sign up and download VLC from http://tiny.cc/getVLCBits; the build I am using is for VCF 4.1. After signing up you will also get information about joining the #VLC-Support Slack channel. If you have any issues with VLC, this is a great place to get quick answers. You will also need to download whatever ESXi version you will be using from my.vmware.com. In my example I will be using 7.0.1 (16850804). After downloading VLC, unzip it to your C:\ drive. Follow the install guide in the zip file to create the vDS and portgroup used for VLC.

Using your favorite editor, edit the “add_3_hosts.json” file. Change the name as you see fit for each host. You can increase or decrease CPU, Memory, and the disks being added to these VMs. Set the management IP information as well. I have included the code for my installation. Once complete, save this file.

{
  "genVM": [
    {
      "name": "esxi-1",
      "cpus": 4,
      "mem": 16,
      "disks": "10,10,50,50,50",
      "mgmtip": "192.168.1.60",
      "subnetmask": "255.255.255.0",
      "ipgw": "192.168.1.1"
    },
    {
      "name": "esxi-2",
      "cpus": 4,
      "mem": 16,
      "disks": "10,10,50,50,50",
      "mgmtip": "192.168.1.61",
      "subnetmask": "255.255.255.0",
      "ipgw": "192.168.1.1"
    },
    {
      "name": "esxi-3",
      "cpus": 4,
      "mem": 16,
      "disks": "10,10,50,50,50",
      "mgmtip": "192.168.1.62",
      "subnetmask": "255.255.255.0",
      "ipgw": "192.168.1.1"
    }
  ]
}

Next, right-click on the PowerShell script “VLCGui” and select “Run with PowerShell”.

The Lab Constructor GUI will appear. Choose the “Expansion Pack!” option.

Input your main VLAN ID, then click on the “Addtl Hosts JSON” box and select the “add_3_hosts.json” file we edited earlier. Click on the ESXi ISO location and choose the ISO that you should have downloaded earlier…you didn’t skip ahead, did you? Input your password as well as NTP, DNS, and domain information. On the right side of the window input your vCenter credentials and hit Connect. Once connected it will show you which clusters, networks, and datastores are supported. The cluster I wanted to use (lab-cl1) was not showing up; this was because I had vSphere HA enabled.

Once I turned off HA on the cluster, it appeared for me to select. I chose my VLC network as well as my physical vSAN datastore, “vsanDatastore”. My VLC network is configured for trunking and has Promiscuous mode, MAC address changes, and Forged transmits all set to “Accept”. Click “Validate” and then click “Construct”.

You will see PowerShell start to deploy the ESXi hosts. You can monitor vCenter until complete; total time to build 3 hosts was just under 10 minutes.

Now create a new cluster and add these three hosts to the cluster. When completed you will have 3 hosts in a new cluster that are all in maintenance mode.

We now enable vSAN on the cluster by right-clicking on the cluster and choosing Settings–>vSAN–>Services–>Configure. I went with the default options and did not choose any disks to be consumed for vSAN, so my vSAN datastore shows 0 capacity and 0 free space. We will use Quickstart to configure the hosts further. If I enable vSAN and then try to use Datastore Sharing, it won't let me configure it, because the required vSAN networking is not configured yet.

Click on your cluster–>Configure–>Quickstart. In the Quickstart configuration view you should see 3 boxes; in the box on the right, click the “Configure” button. We first configure our distributed switch. I already had one created that I wanted to use, so I selected it for my vSAN network, added a new portgroup name, and then chose the two adapters I wanted to use.

Next we configure the information for the VMkernel adapters. I have a VLAN that I use for all of my vSAN traffic (VLAN 30), then I add the information for the static IPs I want to use from that subnet. Use the Autofill option…it will save you time.

I did not make any changes on the Advanced Options, I did not claim any disks for vSAN, and I did not configure a proxy. Click “Next” until you get to the review screen. If satisfied with your choices, click “Finish”.

Once the changes from Quickstart are complete, click on your cluster, then “Configure”, and then Datastore Sharing. Notice I still show a vsanDatastore (3) but it has no space. Click “MOUNT REMOTE DATASTORE”.

I chose “vsanDatastore”, which is the physical storage for this cluster; all of the other datastores you see here are virtual. Click “Next” and notice that the compatibility check is all green this time because the vSAN networking is configured. Click “Finish”.

Now that we mounted our datastore, let's create a new VM on it. I just selected all of the defaults, but you could use a template to test with if you already had one deployed.

Let’s power up the VM. We now have a VM deployed in our HCI Mesh cluster using the vSAN datastore from my lab-cl1 cluster.

This is just one example of some quick testing I did because VLC helped me to deploy my ESXi hosts quickly. I hope you found it helpful.