vSphere 7.0 with Kubernetes – Nested Setup – Zero to Hero

This blog post is a prettified version of my notes on how to deploy vSphere 7.0 with Kubernetes. You can use the same process to install any nested environment; for example, this architecture works well for installing PKS too.

Hosted environment setup

The base architecture of the nested environment is NSX-T nested within NSX-T, with a nested vSAN deployment for storage. The hosted environment should have at least three physical ESXi servers and NSX-T 2.5+ installed and configured. You can change the architecture to a vDS port-group-based deployment instead of NSX-T logical segments; in that case pay attention to the hosted network configuration and configure the port groups as if they were the NSX-T logical segments.

Hosted NSX

The ESXi servers should be prepared for NSX-T installation with overlay and uplink networks. The hosted NSX-T is configured with a T0 gateway to provide the uplink connectivity.

The following diagram represents the NSX-T configuration:

image2020-4-13_10-54-50.png

IP Pools:

Name         IP Range
Management   12.12.12.0/24
Uplink       12.12.13.0/24
Ingress      12.15.0.0/16
Egress       12.16.0.0/16
Pods         12.13.0.0/16
Nodes        12.14.0.0/16

Start by creating a T1 router for the nested environment under the main T0

image2020-4-12_21-21-0.png
image2020-4-12_21-21-25.png

Create two logical segment profiles: one that enables MAC learning and one that disables switch security.

image2020-4-12_21-25-15.png

Create two segments for the nested environment: one for the Uplink and one for Management and the nested overlay.

image2020-4-12_21-20-40.png

When creating the segments, connect each segment to the T1 and change the segment profiles to the new ones you created, with MAC learning enabled and without any switch security.

image2020-4-12_21-22-42.png

Finish the configuration of the T1 for the nested environment

The most important step of the preparation is the switch security profile; without that segment profile configuration, the nested T0 won’t be able to connect externally.

Create static routes on the T1 router for the nested environment. The following static routes need to be created: Ingress, Egress, Pods and Nodes.

image2020-4-12_21-58-41.png

Add Route advertisement

image2020-4-13_0-28-47.png

Hosted ESX

Hosted ESXi setup – if you would like to use vSAN in the nested environment, you need to run the following command on each hosted ESXi server:

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
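
If you want to verify the change afterwards, you can list the same advanced option; the Int Value field should show 1 (a minimal check):

esxcli system settings advanced list -o /VSAN/FakeSCSIReservations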

Nested environment setup

Prepare the nested environment.

  • If you don’t have a domain, NTP or DNS, or you want to use internal services for all of them, create a Windows server with all of these roles to leverage them in the nested environment. They can all run on the same Windows server, and you can use that machine as a jump server for the environment as well.
  • Important – before you start the installation, create DNS records for all components: the ESXi hosts, the vCSA, the NSX Manager and the NSX Edge.
  • If you don’t want to use WSL on Windows, you will need a Linux jump machine to connect to and manage the k8s cluster and services.
  • The nested environment consumes its management components as external services: the NSX Manager, the Edge, the vCenter and the domain are installed in the hosted environment and not internally. See the diagram above to better understand the architecture of the nested environment.
  • Configure the ESXi VM
    • Create a new virtual machine
image2020-4-12_22-11-17.png
  • Choose a name, compute and storage resources, compatibility and in the guest OS, choose Other – VMware ESXi 6.5 or later
image2020-4-12_22-13-29.png
  • In the Customize hardware step there are several important settings, since we will use vSAN in the nested environment
    • CPU configuration – enable Expose HW assisted virtualization
image2020-4-12_22-14-56.png
  • Configure memory according to what you have in the hosted environment.
    • Hard disks – we will add two disks on top of the system disk for vSAN: a small one for cache and a larger one for capacity
image2020-4-12_22-18-25.png
  • Add another network device. Change the network configuration to connect to the Management switch you created in the hosted NSX. One vNIC will be for management and the other one, vmnic1, for the nested NSX overlay.
image2020-4-12_22-19-33.png
  • Important – change the VM boot option to BIOS; without that you won’t be able to boot from the CD (ISO image)
image2020-4-12_22-20-20.png
image2020-4-12_23-15-55.png
  • Configure the small disk for the system installation
image2020-4-12_23-16-31.png
  • Configure password and go to configure the management network
image2020-4-12_23-27-44.png
  • Configure the IP, GW and DNS for the ESX. Important – configure the DNS and the domain suffix
image2020-4-12_23-29-12.png
image2020-4-12_23-32-27.pngimage2020-4-12_23-30-5.png
  • Don’t forget to commit the changes and press Y
image2020-4-12_23-30-33.png
  • Continue the configuration in the ESXi host client (web UI)
    • On the Networking page, configure the virtual switch with an MTU of 9000 and change the security settings to accept everything (the equivalent esxcli commands are sketched after the screenshot)
image2020-4-13_0-6-28.png
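    If you prefer the CLI over the host client, the same switch settings can be applied from the ESXi shell (a sketch, assuming the default vSwitch0 name):

    # Raise the MTU of the standard switch to 9000
    esxcli network vswitch standard set -v vSwitch0 -m 9000
    # Accept promiscuous mode, MAC changes and forged transmits (needed for nested networking)
    esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true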
  • On the VMkernel tab, configure vmk0: change the MTU to 1800 and add vMotion in the settings below (a CLI sketch follows the screenshot). **Because we are changing the management VMkernel, you will experience a short disconnect from the ESXi.**
    We will use the VM Network and the default VMkernel for everything: vMotion, Management and vSAN.
image2020-4-13_0-7-54.png
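    The MTU change can also be made from the ESXi shell (a minimal sketch, assuming vmk0 is the management VMkernel as above):

    # Set the MTU of the management VMkernel interface to 1800
    esxcli network ip interface set -i vmk0 -m 1800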
  • Important – change the TCP/IP stack and add the domain configuration to the Default TCP/IP stack
image2020-4-13_0-9-10.png
  • Go to the Manage section, in Service, start the SSH service (TSM-SSH)
    • Repeat the configuration on all ESXi hosts, and don’t add them to the vCenter before finishing the configuration according to the flow above
  • Configure the vCSA
    • Download the vCSA appliance 7.0 and ESXi 7.0 from My VMware https://my.vmware.com/group/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/7_0
    • Deploy the vCSA from a Windows machine that has access to the management logical switch and can resolve the name of the vCSA. The vCSA installation has two stages: in the first one, configure the hosted vCSA as the target to deploy the nested vCSA on. Configure the IP of the vCSA according to the DNS record and make sure you have connectivity to the management logical switch. Important – configure an NTP server that is up and in sync during the installation.
image2020-4-12_23-1-51.png
image2020-4-12_23-2-28.pngimage2020-4-12_23-3-16.png
  • You can stay with the Tiny deployment size if the NSX installation is external and not embedded
image2020-4-12_23-4-7.png
  • You can enable thin disk mode as it’s a lab environment.
image2020-4-12_23-4-25.png
  • Choose the management logical switch and configure the network parameters
image2020-4-12_23-6-21.png
  • That completes the first stage of the installation. If everything is right and you have connectivity and name resolution for the vCSA machine, you will get to the next screen:
image2020-4-12_23-20-19.png
  • Continue to the second stage of the configuration, set the NTP and enable SSH
image2020-4-12_23-22-15.png
  • Configure the SSO domain for the nested environment.
image2020-4-12_23-22-38.png
  • Click OK to finish the installation
image2020-4-12_23-23-7.png
  • Connect to the vCSA to start the configuration of the nested environment.
image2020-4-12_23-57-49.png
  • You need a set of licenses for the nested environment: a license for vCenter, vSphere with Kubernetes and vSAN
image2020-4-12_23-59-52.png
  • Create a new Datacenter and a new Cluster. Don’t configure anything on the cluster side yet; we will do that later.
image2020-4-13_0-1-33.png
image2020-4-13_0-1-57.png
  • After you finish the ESXi configuration according to the steps above, you can add the hosts to the cluster we created
image2020-4-13_0-32-25.png
  • Add the ESX servers with the FQDN and accept the certificate message
    • Get the servers out of maintenance mode
image2020-4-13_0-34-4.png
  • To configure vSAN, DRS and HA (all need to be enabled for the k8s services), we first need to enable vSAN on the VMkernel of each ESXi. Go to Configure on each ESXi host, then VMkernel adapters, and edit vmk0
image2020-4-13_0-35-42.png
  • Check the MTU again (it should be 1800) and tick the vSAN checkbox at the bottom; a CLI alternative is sketched below. Do that on all ESXi hosts before enabling vSAN.
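    The vSAN tagging of vmk0 can also be done over SSH on each host (a sketch, assuming vmk0 is the management VMkernel as above):

    # Tag the management VMkernel interface for vSAN traffic
    esxcli vsan network ip add -i vmk0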
  • Go to the Cluster, press Configure, go to vSAN Services and press Configure. Stay with the default configuration:
image2020-4-13_0-37-36.png
image2020-4-13_0-38-7.png
  • Claim the smaller disks for the cache tier (change the drive type to Marked as Flash) and the larger ones for capacity
image2020-4-13_0-39-14.png
  • Click Next and Finish. Check the Storage view in the vCenter after the tasks finish to see the vsanDatastore with 1 TB of capacity (with 4 ESXi hosts)
image2020-4-13_0-41-56.png
  • Now we need to tag the vsanDatastore to use it as part of our vSphere with k8s environment; the system will create a storage class in the Supervisor Cluster according to that tag. Right-click the vsanDatastore and choose Assign Tag
image2020-4-13_0-44-20.png
  • Add Tag – add Tag Category
image2020-4-13_0-45-8.png
  • Add the Tag and the new category
image2020-4-13_0-45-36.png
  • Don’t forget to assign the Tag to the vsanDatastore (a govc alternative is sketched after the screenshot)
image2020-4-13_0-46-17.png
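    If you prefer a CLI for the tagging, govc can create the category, the tag and the assignment (a sketch under the assumption that govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD point at the nested vCSA; the category name and the datastore inventory path below are placeholders, only the k8s-storage-policy tag name comes from the screenshots):

    # Create a tag category, create the tag inside it, and attach the tag to the vSAN datastore
    govc tags.category.create k8s-storage
    govc tags.create -c k8s-storage k8s-storage-policy
    govc tags.attach k8s-storage-policy /Datacenter/datastore/vsanDatastore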
  • Create a storage policy based on the Tag we created. Click Menu – Policies and Profiles – VM Storage Policies
image2020-4-13_10-4-33.png
  • Enable tag-based placement rules
image2020-4-13_10-5-37.png
  • choose the Tag category we created in the steps before and browse tags to choose the k8s-storage-policy tag
image2020-4-13_10-6-30.png
  • Choose the vsanDatastore as the Datastore to use and click Finish.
    • Go to the Cluster configuration in the vCenter and enable DRS and HA on the cluster
image2020-4-13_0-46-52.png
image2020-4-13_0-47-15.png
image2020-4-12_23-35-53.png
  • Deploy the NSX Manager in the hosted vCSA. A medium-size manager is fine for a nested environment; the Edge must be at least Large
image2020-4-12_23-38-50.png
  • Choose the management switch you created in the hosted environment.
image2020-4-12_23-39-19.png
  • The passwords for the root and admin users must be complex and long; otherwise you will have problems managing the manager machine
image2020-4-12_23-40-47.png
  • The network configuration should follow the hosted environment: a management IP from the logical switch in the hosted environment. It’s important to configure a proper DNS and NTP
image2020-4-12_23-42-55.png
image2020-4-12_23-43-30.png
  • Enable SSH, as we will need to configure the Edge connectivity after the installation.
    • Deploy the NSX Edge in the hosted vCSA
image2020-4-12_23-47-8.png
  • IMPORTANT – DEPLOY LARGE EDGE (at least) if you want to use the k8s services
image2020-4-12_23-48-2.png
  • The network configuration should be two vNICs connected to the Management switch and two to the Uplink switch, both switches from the hosted network
image2020-4-12_23-49-38.png
  • The passwords for the root and admin users must be complex and long; otherwise you will have problems managing the machine
image2020-4-12_23-51-33.png
  • The network configuration should follow the hosted environment: a management IP from the logical switch in the hosted environment. It’s important to configure a proper DNS and NTP
image2020-4-12_23-53-9.png
image2020-4-12_23-54-35.png
  • NSX configuration
    • After the deployment you will need to log in to the NSX Manager and enter the license under System – Licenses to be able to join the Edge to the manager
image2020-4-13_1-17-34.png
  • SSH to the manager and the Edge. In the NSX Manager, run the following command:
    openso-pacific-nsx-mgr> get certificate api thumbprint
    e82de2f251cdf59107f38e9afbe379673cc24a5b35c46a015788e48ddcacecd1
  • We will use the output to join the Edge to the management plane. On the Edge, run the following command:
    openso-nsx-edge> join management-plane 12.12.12.10 thumbprint e82de2f251cdf59107f38e9afbe379673cc24a5b35c46a015788e48ddcacecd1 username admin
  • Insert the admin password; the Edge should join successfully, as shown in the following output
image2020-4-13_1-21-11.png
  • Check the connectivity with the get managers command
    openso-nsx-edge> get managers
    - 12.12.12.10      Connected (NSX-RPC) *
  • Now configure the overlay and uplink in the nested environment. First, the transport zones must be created with the "nested_nsx": true value to get nested connectivity. Install Postman or craft an API call to create the Overlay and VLAN transport zones.
  • With Postman, we will send a POST request for each TZ, one for VLAN and one for Overlay, to the URL https://12.12.12.10/api/v1/transport-zones/ with the following bodies (a curl alternative is sketched after them):
    { "description":"VLAN", "display_name": "VLAN", "nested_nsx": true, "host_switch_name": "VLAN", "transport_type": "VLAN" }
    { "description":"Overlay", "display_name": "Overlay", "nested_nsx": true, "host_switch_name": "OVERLAY", "transport_type": "OVERLAY" }
image2020-4-13_1-6-11.png
image2020-4-13_1-22-8.png
  • Now add the vCenter to the NSX management plane. In the NSX Manager, go to System – Fabric – Compute Managers. Important – check the Enable Trust option
image2020-4-13_1-23-20.png
  • Create a new IP pool for the overlay connectivity; it is not exposed externally, so you can choose whatever range you want. Go to Networking – IP Address Pools and add an IP address pool
image2020-4-13_1-29-51.png
  • Add a subnet by clicking Set and then Add Subnet – IP Range
image2020-4-13_1-31-9.png
image2020-4-13_1-31-40.png
  • Now create a transport node profile for the ESXi hosts: go back to System – Fabric – Profiles – Transport Node Profiles
image2020-4-13_1-24-40.png
  • Choose the new Overlay transport zone we created in the TZ section. NIOC stays at the default. The uplink profile will be “nsx-edge-single-nic-uplink-profile”; we don’t use HA in our nested environment, everything is on a single NIC. LLDP is Disabled.
image2020-4-13_1-27-12.png
  • Choose the IP Pool you created before, and in the Physical NICs field enter vmnic1 and press Enter. That will configure the second vmnic of the nested ESXi hosts to connect to the overlay network (a quick check for the NIC is sketched after the screenshot)
image2020-4-13_1-35-44.png
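    Before applying the profile, you can confirm over SSH that vmnic1 is present and not yet claimed by a switch on each nested ESXi (a minimal check):

    # List the physical NICs of the host; vmnic1 should be listed alongside vmnic0
    esxcli network nic list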
  • Go to the Nodes tab to apply the transport node profile to the ESXi hosts
image2020-4-13_1-37-53.png
  • Check the cluster-level box, choose Configure NSX to apply the transport node profile and press Apply
image2020-4-13_1-38-45.png
  • Check the status of the servers after the configuration; they should all be up and running
image2020-4-13_8-59-55.png
  • Now configure the Edge node transport zones. In the Edge Transport Nodes tab, choose the Edge and press Edit
    • Configure the Overlay with the same parameters we configured for the ESXi hosts. In the Virtual NICs, choose fp-eth0, which should be the second vNIC with the Management switch connectivity, 12.12.12.0/24
image2020-4-13_1-42-47.png
  • Add a second switch for the Uplink and configure the VLAN transport zone according to the screenshot. The virtual NIC should be the third vNIC with the Uplink switch connectivity (the second hosted segment), 12.12.13.0/24
image2020-4-13_1-45-29.png
  • Go to Edge Clusters and add an Edge Cluster. Add your only Edge to the cluster and apply.
image2020-4-13_1-39-52.png
  • Check the connectivity of the Overlay by connecting to one of the ESXi hosts and running the following command
    [root@openso-pacific-esx-01:~] esxcfg-vmknic -l
  • The output should show the IPs of the VMkernels
image2020-4-13_9-2-8.png
  • vmk10 is the overlay network, and we can see the IP that was assigned from the IP Pool. Now ping the other ESXi hosts and the NSX Edge with the following command
    The command pings from vmk10 with don’t-fragment set and a payload size that produces a 1600-byte packet, to check that the overlay MTU works properly. The IPs of the rest of the ESXi hosts and the Edge are sequential: .10–.13 for the ESXi hosts and .14 for the Edge (a small loop that pings them all is sketched after the screenshot).

     

    [root@openso-pacific-esx-01:~] vmkping ++netstack=vxlan -s 1572 -d -I vmk10 192.168.113.11
image2020-4-13_9-6-28.png
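    To check all peers in one go, you can loop over the sequential overlay IPs directly from the ESXi shell (a sketch, assuming the same 192.168.113.x addresses and vmk10 interface shown above):

    # Ping each nested ESXi (.10-.13) and the Edge (.14) across the overlay without fragmentation
    for i in 10 11 12 13 14; do
      vmkping ++netstack=vxlan -s 1572 -d -I vmk10 192.168.113.$i
    done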
  • We can see there is connectivity between all components, so we can start the configuration of the Uplink to the hosted T0.
  • Configure the Uplink connectivity from the nested to the hosted environment.
  • First, add a Segment and give it the name “Uplink”. Don’t connect the Segment yet, change the transport zone to the new VLAN transport zone we created and enter VLAN 0 in the VLAN box
image2020-4-13_9-21-18.png
  • Save the segment and create a new nested T0 in the Networking – Tier-0 Gateways section.
    • Configure the HA Mode to Active Standby (we have only one Edge), change the failover mode to Preemptive, choose the Edge cluster you created and the Edge node, and save.
image2020-4-13_9-30-1.png
  • Add the default static route with the next hop; the next hop is the hosted T1 interface on the Uplink switch. Click Set on Static Routes under the ROUTING section
image2020-4-13_9-11-43.png
image2020-4-13_9-12-12.png
  • Apply and save the configuration.
    • Add a T0 interface to connect the Uplink Segment: click Set on Interfaces and add an interface, configure the name and an IP address from the Uplink switch connected to the hosted T1, choose the Edge node and save the configuration.
image2020-4-13_9-33-17.png
  • Add route re-distribution to the nested T0
image2020-4-13_9-38-23.png
image2020-4-13_9-14-40.pngimage2020-4-13_9-14-14.png
  • Check the connectivity to the nested T0 by pinging the Uplink interfaces: ping the Uplink interface of the hosted T1, 12.12.13.1, and the Uplink interface of the nested T0, 12.12.13.2; you should get a reply from both
image2020-4-13_9-36-26.png
  • The configuration of the Overlay and Uplink networks is complete; the networking is ready for vSphere with k8s enablement.
  • Configure the Kubernetes Supervisor Cluster
    • Assign licenses to the hosts, the vCenter and vSAN
image2020-4-13_9-51-49.png
  • Configure the Supervisor Cluster by clicking Workload Management under Menu
image2020-4-13_9-53-6.png
  • Click Enable, choose the Cluster and click Next. If the Cluster isn’t shown in the “Compatible” list, check the configuration against this document, especially the DNS configuration, DRS and HA
image2020-4-13_9-55-12.png
  • In the Cluster settings we can choose the Tiny size, as this is just for a POC or lab use case
image2020-4-13_9-56-25.png
  • Configure the networking with the parameters we created. Choose the VM Network for the management control plane and start the IP range from 12.12.12.100
image2020-4-13_9-58-23.png
  • Choose the vDS (the field appears blank but you can still select it) and the Edge Cluster, and configure the subnets for Pods, Nodes, Ingress and Egress according to the static routes and IP pools we configured in the network
image2020-4-13_10-1-42.png
  • Choose the storage policy we created with the k8s-storage-policy tag in the next step
image2020-4-13_10-10-57.png
  • Review and finish the installation. The cluster will be configured in the background; that can take some time to finish
image2020-4-13_10-11-58.png
  • Once the configuration is done you can create the first Namespace on the Supervisor Cluster
image2020-4-13_10-57-17.png
  • click Create Namespace and choose the cluster
image2020-4-13_10-58-16.png
  • create a namespace with the name blog and connect to the cluster by getting the information from the Open link in the Status box
image2020-4-13_10-59-56.png
  • Follow the steps: download the kubectl vSphere plugin and run the command from a Linux/Windows machine that can connect to the Supervisor Cluster (a note on switching contexts follows below)
    kubectl vsphere login --server=https://12.15.80.1/ --insecure-skip-tls-verify -u administrator@vsphere.local
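    After a successful login, the plugin adds contexts to your kubeconfig, typically one per Supervisor namespace, so you can switch to the blog namespace’s context before running commands (a sketch; verify the exact context name with get-contexts):

    # List the contexts created by the kubectl vSphere plugin and switch to the blog namespace context
    kubectl config get-contexts
    kubectl config use-context blog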
  • Now you can interact with the cluster through kubectl
    localadmin@server:~$ kubectl get ns
    NAME                      STATUS   AGE
    blog                      Active   14m
    default                   Active   49m
    kube-node-lease           Active   50m
    kube-public               Active   50m
    kube-system               Active   50m
    vmware-system-capw        Active   49m
    vmware-system-csi         Active   49m
    vmware-system-kubeimage   Active   49m
    vmware-system-nsx         Active   49m
    vmware-system-registry    Active   49m
    vmware-system-tkg         Active   49m
    vmware-system-ucs         Active   49m
    vmware-system-vmop        Active   49m
    localadmin@server:~$ kubectl get nodes
    NAME                               STATUS   ROLES    AGE   VERSION
    421e5216577cdec079fc453e77b737f2   Ready    master   44m   v1.16.7-2+bfe512e5ddaaaa
    421e7b33b1dbd5a65096cd03ef26239d   Ready    master   44m   v1.16.7-2+bfe512e5ddaaaa
    421ec8b0ea489db816a08083e009e066   Ready    master   50m   v1.16.7-2+bfe512e5ddaaaa
    openso-pacific-esx-01.lab.local    Ready    agent    28m   v1.16.7-sph-4d52cd1
    openso-pacific-esx-02.lab.local    Ready    agent    28m   v1.16.7-sph-4d52cd1
    openso-pacific-esx-03.lab.local    Ready    agent    28m   v1.16.7-sph-4d52cd1
    openso-pacific-esx-04.lab.local    Ready    agent    28m   v1.16.7-sph-4d52cd1
  • You can pull my git repo to install some sample apps and check the capabilities of the Supervisor Cluster: git clone https://github.com/0pens0/tanzu_demo.git
  • localadmin@server:~$ git clone https://github.com/0pens0/tanzu_demo.git
    Cloning into 'tanzu_demo'...
    remote: Enumerating objects: 32, done.
    remote: Counting objects: 100% (32/32), done.
    remote: Compressing objects: 100% (26/26), done.
    remote: Total 32 (delta 12), reused 22 (delta 6), pack-reused 0
    Unpacking objects: 100% (32/32), done.
    localadmin@server:~$
  • The blog.yaml will deploy a blog with persistent data. First, assign the storage policy to the Namespace
  • After the configuration of the storage policy, you can see the StorageClass object that was created on the k8s side
  • localadmin@server:~/tanzu_demo$ kubectl get sc
    NAME                 PROVISIONER              AGE
    k8s-storage-policy   csi.vsphere.vmware.com   79s
  • If needed, you can add permissions from the SSO domain of the vCenter and also quotas for CPU, memory and storage limits.
  • Apply the blog.yaml in the blog namespace with the following command, check the pod, svc and pvc status (a one-liner for that follows below), and then browse the app with the LB SVC IP
    localadmin@server:~/tanzu_demo$ kubectl create -f blog.yaml -n blog
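    To check everything the manifest created in one go (a minimal sketch; the actual resource names depend on blog.yaml):

    # Pods, services (including the LoadBalancer external IP) and persistent volume claims in the blog namespace
    kubectl get pods,svc,pvc -n blog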
  • Browsing the app won’t work yet: by default, NSX configures deny rules between the namespaces and management. Add an any-any rule in the Distributed Firewall security section to enable communication

  • Publish the new rule and browse the app again

**Important message – this is NOT a supported architecture; it is for POCs, labs, tests etc. only. **

