1. Steps To Create Kubernetes Cluster on HI GIO Portal

Overview

This document explains how to create a Kubernetes cluster on HI GIO, including selecting configurations, deploying nodes, and initializing the control plane.

Procedure

  1. Prerequisites:

  • Create a network for the cluster with available Static IP Pools.

  • Create firewall and SNAT rules to ensure VMs in the cluster can access the internet.

  • Make sure HI GIO Load Balancing is enabled.

  • Make sure there is at least one available public IP.

2. Procedure:

Step 1: Log in to the HI GIO portal with tenant account > Click More > Kubernetes Container Clusters

ảnh-20241203-075259.png

Step 2: Click NEW and follow the wizard to create a new HI GIO Kubernetes cluster.

  • Click NEXT

    ảnh-20241203-080547.png
  • Enter the name of the cluster and select a Kubernetes version > NEXT

ảnh-20241203-080717.png
  • Click NEXT in step 3.

Attaching clusters to Tanzu Mission Control is currently not supported.

ảnh-20241203-080844.png
  • Select oVDC and Network for nodes > NEXT

ảnh-20241203-080946.png
  • In the Control Plane window, select the number of nodes and the disk size, optionally select a sizing policy, a placement policy, and a storage profile, then click NEXT.

image-20241211-094924.png

Number of Nodes:

  • Non-HA: 1

  • HA: 3

Disk Size (GB): The minimum allowed is 20 GB.

Sizing Policy:

  • TKG medium: if the cluster has 10 or fewer worker nodes.

  • TKG large: if the cluster has more than 10 worker nodes.

Placement Policy: Leave blank. A placement policy is not applied to HI GIO Kubernetes clusters.

Storage Policy: Select an available storage policy.
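The sizing rule above (TKG medium up to 10 worker nodes, TKG large beyond) can be sketched as a small helper; the function name is ours, not part of the HI GIO portal:

```python
def control_plane_sizing_policy(worker_node_count: int) -> str:
    """Pick the control-plane sizing policy from the planned worker-node
    count, per the rule above: TKG medium for up to 10 worker nodes,
    TKG large for more than 10."""
    return "TKG medium" if worker_node_count <= 10 else "TKG large"
```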

  • Configure worker pool settings > NEXT

ảnh-20241203-081944.png

Name: Enter the worker pool name.

Number of Nodes: Enter the number of nodes in the worker pool.

Disk Size (GB): The minimum allowed is 20 GB.

Sizing Policy:

  • TKG small: small VM sizing policy for a Kubernetes cluster node (2 CPU, 4 GB memory)

  • TKG medium: medium VM sizing policy for a Kubernetes cluster node (2 CPU, 8 GB memory)

  • TKG large: large VM sizing policy for a Kubernetes cluster node (4 CPU, 16 GB memory)

  • TKG extra-large: extra-large VM sizing policy for a Kubernetes cluster node (8 CPU, 32 GB memory)

Placement Policy: Leave blank. A placement policy is not applied to HI GIO Kubernetes clusters.

Storage Policy: Select an available storage policy.

(Optional) To create additional worker node pools, click Add New Worker Node Pool and configure worker node pool settings.

 

  • Configure the storage class > NEXT

ảnh-20241203-082202.png

Select a Storage Profile: Select one of the available storage profiles.

Storage Class Name: The name of the default Kubernetes storage class. This can be any user-specified name with the following constraints, based on Kubernetes requirements:

  • Contain a maximum of 63 characters

  • Contain only lowercase alphanumeric characters or hyphens

  • Start with an alphabetic character

  • End with an alphanumeric character

Reclaim Policy:

  • Delete policy: deletes the PersistentVolume object when the PersistentVolumeClaim is deleted.

  • Retain policy: does not delete the volume when the PersistentVolumeClaim is deleted; the volume can be reclaimed manually.

Filesystem:

  • xfs

  • ext4: the default filesystem used for the storage class.
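The Storage Class Name constraints above can be checked before submitting the form. A minimal sketch using Python's standard re module (the function name and regex are ours, derived from the four constraints listed):

```python
import re

# Per the constraints above: start with a letter, contain only lowercase
# alphanumerics or hyphens, end with an alphanumeric character.
# The 63-character maximum is checked separately.
_NAME_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def is_valid_storage_class_name(name: str) -> bool:
    """Return True if the name satisfies the storage class naming rules."""
    return len(name) <= 63 and _NAME_RE.fullmatch(name) is not None
```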

  • Configure the Kubernetes network > NEXT

ảnh-20241203-091426.png

Pods CIDR: Specifies the range of IP addresses used for Kubernetes pods. The default value is 100.96.0.0/11. The pod subnet size must be equal to or larger than /24.

Services CIDR: Specifies the range of IP addresses used for Kubernetes services. The default value is 100.64.0.0/13.

Control Plane IP: You can specify your own IP address as the control plane endpoint. You can use an external IP from the gateway or an internal IP from a subnet different from the routed IP range.

Virtual IP Subnet: You can specify a subnet CIDR from which one unused IP address is assigned as the control plane endpoint. The subnet must represent a set of addresses in the gateway. The same CIDR is also propagated as the subnet CIDR for the ingress services on the cluster.

Enter an available public IP address in the Control Plane IP field.
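The CIDR values above can be sanity-checked with Python's standard ipaddress module before submitting the form. The defaults are taken from the table; the checks themselves are ours:

```python
import ipaddress

pods_cidr = ipaddress.ip_network("100.96.0.0/11")      # default Pods CIDR
services_cidr = ipaddress.ip_network("100.64.0.0/13")  # default Services CIDR

# The pod subnet must be /24 or larger; a smaller prefix length means a
# larger subnet, so the prefix length must not exceed 24.
assert pods_cidr.prefixlen <= 24

# The pod and service ranges must not overlap.
assert not pods_cidr.overlaps(services_cidr)
```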

 

  • Enable Auto Repair on Errors and Node Health Check > NEXT

ảnh-20241203-082942.png

Auto Repair on Errors: If errors occur before this cluster becomes available, the CSE Server will automatically attempt to repair the cluster.

Node Health Check: Unhealthy nodes will be remediated after this cluster becomes available according to unhealthy node conditions and remediation rules.

 

  • Review all cluster information and click FINISH to create the cluster.

ảnh-20241203-083625.png

Step 3: Wait until the cluster status is Available, then click DOWNLOAD KUBE CONFIG to download the kubeconfig file.

ảnh-20241203-092117.png

Please configure the VPC firewall to allow access to the Control Plane IP using port 6443.
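Once the firewall rule is in place, reachability of the control plane endpoint on port 6443 can be verified with a short TCP check. This is a sketch: the function name is ours, and the commented IP is a placeholder, not a real endpoint:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your cluster's Control Plane IP):
# port_reachable("203.0.113.10", 6443)
```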

 

End.