Anthos clusters on bare metal, the Google Distributed Cloud Virtual (GDC Virtual) software, lets you run Google Kubernetes Engine (GKE) in your own on-premises data centers. With it, you can create, maintain, and upgrade Kubernetes clusters on your own hardware and operate them directly on your machine resources, with the flexibility, performance, and security your environment requires.
Anthos clusters on bare metal
Anthos clusters on bare metal lets you define four types of clusters (a minimal configuration sketch follows this list):
Admin: A cluster that creates and manages user clusters.
User: A cluster that runs workloads.
Standalone: A single cluster that can run workloads and manage its own administration, but cannot create or manage other user clusters.
Hybrid: A single cluster that serves as both an admin and a workload cluster, and can also create and manage other user clusters.
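The cluster type is declared in the cluster configuration file that bmctl generates. As a rough illustration (the field names follow the Cluster resource as I understand it; the cluster name and values are placeholders, so check the config that bmctl generates for your version):

```
# Fragment of a bmctl-generated cluster config (illustrative; verify against
# the file produced by `bmctl create config` for your version).
cat <<'EOF' > cluster1-type-fragment.yaml
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: cluster1                 # placeholder cluster name
  namespace: cluster-cluster1    # bmctl places each cluster in its own namespace
spec:
  # One of: admin, user, standalone, hybrid
  type: standalone
EOF
```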
Prerequisites for Anthos clusters on bare metal
Before creating a cluster with Anthos clusters on bare metal, you must do the following:
Initialize a Google Cloud project: Create a new Google Cloud project for this quickstart that groups all of your Google Cloud resources. To create an Anthos cluster on bare metal, your account needs one of the following roles on the project: Editor or Owner.
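If you prefer the command line, a minimal sketch of creating the project and granting a role might look like this (the project ID and email are placeholders):

```
# Create a new project and grant your account the Owner role on it.
# anthos-bm-quickstart and you@example.com are placeholders - use your own values.
gcloud projects create anthos-bm-quickstart --name="Anthos BM quickstart"
gcloud projects add-iam-policy-binding anthos-bm-quickstart \
  --member="user:you@example.com" \
  --role="roles/owner"
gcloud config set project anthos-bm-quickstart
```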
Set up the Google Cloud CLI: This quickstart uses the kubectl and bmctl utilities to create and configure clusters. Installing these utilities requires gcloud and gsutil; the command-line tools gcloud, gsutil, and kubectl are all part of the Google Cloud CLI.
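Assuming the Google Cloud CLI is already installed, authenticating and adding kubectl as a gcloud component looks roughly like this:

```
# Authenticate, update components, and install kubectl as a gcloud component.
gcloud auth login
gcloud components update
gcloud components install kubectl
```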
Create a Linux administrative workstation: Configure a Linux admin workstation once gcloud, gsutil, and kubectl have been installed. Do not use Cloud Shell as your admin workstation.
Install bmctl: bmctl is the command-line tool used to create Anthos clusters on bare metal. The bmctl command can automatically create the Google service accounts and enable the APIs that your project needs to run Anthos clusters on bare metal (a download and setup sketch follows).
If you prefer to create your own service accounts or perform other manual project setup steps, see Enabling Google services and service accounts before you use bmctl to create clusters.
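As a hedged sketch of the download-and-configure flow from the quickstart (the release bucket version in the path is illustrative, so confirm the current version against the documentation):

```
# Download bmctl from the Anthos release bucket (version path is illustrative)
# and make it executable on the admin workstation.
gsutil cp gs://anthos-baremetal-release/bmctl/1.16.0/linux-amd64/bmctl .
chmod a+x bmctl

# Generate a cluster config; bmctl can also enable the required APIs and
# create the Google service accounts for the project.
./bmctl create config -c cluster1 \
  --enable-apis \
  --create-service-accounts \
  --project-id=anthos-bm-quickstart
```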
Why choose Anthos clusters on bare metal?
Bring Your Own Node (BYON): Anthos clusters on bare metal deliver the best performance and flexibility by letting you deploy applications directly on your own hardware infrastructure.
Enhanced efficiency and reduced cost: Anthos clusters on bare metal manage application deployment and health across existing corporate data centers and at the network edge, which makes operations more efficient and lets analytics applications run at peak performance.
Compatible Security: You can tailor your network, hardware, and apps to fit your unique requirements since you control your node environment.
Monitored application deployment: Anthos clusters on bare metal offer advanced monitoring of your environment's performance and health.
Network flexibility: Your network can be tuned for low latency since you manage the network requirements.
Secure design and control that is easy to deploy: With few connections to external resources, you can tailor your infrastructure security to your specific requirements. Most significantly, deploying security systems doesn't involve any extra VM complexity.
Installation preflight checks: Anthos clusters on bare metal run on open source and enterprise Linux distributions and on minimal hardware infrastructure, so they are versatile in your environment, and preflight checks verify that your machines meet the installation requirements.
Application deployment and load balancing: Anthos clusters on bare metal offer Layer 4 and Layer 7 load balancing options configured at cluster creation time (see the configuration sketch after this list).
Increased etcd reliability: Anthos clusters on bare metal control planes include an etcddefrag pod that monitors the size of etcd databases, defragments them to reclaim storage from large databases, and helps recover etcd when disk space is exceeded.
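To illustrate the load-balancing point above, here is a rough fragment of the bundled load balancer section of a cluster config file (all IP addresses are placeholders, and the exact layout may differ between versions):

```
# Illustrative loadBalancer section of a cluster config file.
cat <<'EOF' > loadbalancer-fragment.yaml
loadBalancer:
  mode: bundled                  # bundled load balancing managed by the cluster
  ports:
    controlPlaneLBPort: 443
  vips:
    controlPlaneVIP: 10.200.0.50 # placeholder virtual IP for the API server
    ingressVIP: 10.200.0.51      # placeholder virtual IP for ingress traffic
  addressPools:
  - name: pool1
    addresses:
    - 10.200.0.51-10.200.0.70    # placeholder address range for services
EOF
```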
Configuring hardware for Anthos clusters on bare metal
Anthos clusters on bare metal support a wide range of systems running on hardware that is supported by the target operating system distributions. In a bare metal arrangement, an Anthos cluster can run on very little hardware or across numerous servers, providing flexibility, availability, and performance.
Minimum requirements:
When installing Anthos clusters on bare metal, you can create the following cluster types:
A user cluster that runs workloads.
An admin cluster that creates and manages user clusters, which run the workloads.
A standalone cluster: a single cluster that can manage and run workloads, but cannot create or manage other user clusters.
A hybrid cluster: a single cluster that can manage and run workloads and can also create and administer additional user clusters.
You can select from the following installation profiles depending on the cluster type and the number of resources needed:
Default: You can use the default profile for all cluster types, and it has standard system resource requirements.
Edge: The edge profile has significantly reduced system resource requirements (a sketch showing how to select a profile follows this list).
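A minimal sketch of selecting a profile in the cluster spec, assuming the profile field behaves as described for your version (the edge profile has historically applied to standalone clusters, and the names here are placeholders):

```
# Illustrative fragment showing the installation profile in the Cluster spec.
cat <<'EOF' > edge-profile-fragment.yaml
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: edge-cluster1            # placeholder name
  namespace: cluster-edge-cluster1
spec:
  type: standalone
  profile: edge                  # "default" or "edge"; edge reduces resource requirements
EOF
```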
Choosing deployment models
Anthos clusters on bare metal support multiple deployment models to accommodate varying requirements for resource footprint, availability, and isolation.
User clusters
A user cluster is a Kubernetes cluster that runs your containerized workloads. It consists of control plane nodes and worker nodes. Anthos clusters on bare metal can support one or more user clusters, and a user cluster must contain one or more worker nodes that run the user workloads.
Admin clusters
An admin cluster is a Kubernetes cluster that manages one or more user clusters. An admin cluster can carry out the following tasks: create user clusters, upgrade user clusters, update user clusters, and delete user clusters (see the example after this paragraph).
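For example, once an admin cluster exists, a user cluster is typically created by pointing bmctl at the admin cluster's kubeconfig. A hedged sketch, where the cluster names, project ID, and paths are placeholders:

```
# Generate a user cluster config, then create the cluster from the admin
# workstation using the admin cluster's kubeconfig (paths are placeholders).
./bmctl create config -c user1 --project-id=anthos-bm-quickstart
# ... edit bmctl-workspace/user1/user1.yaml to set type: user, node IPs, and VIPs ...
./bmctl create cluster -c user1 \
  --kubeconfig bmctl-workspace/admin1/admin1-kubeconfig
```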
Deployment models
To accommodate various needs, Anthos clusters on bare metal provide the following deployment models:
Standalone cluster deployment
In this deployment model, a single cluster serves as both a user cluster and an admin cluster.
The benefits of this model include:
There is no need for a separate admin cluster.
It saves three nodes in a high-availability (HA) arrangement.
Because workloads run on the same cluster that holds sensitive data, this model has security tradeoffs. The sensitive data includes:
SSH keys
Your Google Cloud service account keys
The following scenarios lend themselves well to this model:
Each cluster is managed independently, with its own SSH keys and Google Cloud credentials.
Clusters operate in isolated network segments, such as demilitarized zones (DMZs).
Clusters operate in edge regions.
Multi-cluster deployment
Use this deployment strategy for more extensive deployments that require separation between several teams or between development and production workloads, as well as if you have a fleet of clusters in the same data center that you wish to administer from a single location.
The clusters in this deployment model are as follows:
One admin cluster: the central administration hub that provides an API for managing user clusters. The admin cluster runs only management-related software.
One or more user clusters: these contain the control plane nodes and the worker nodes that handle user workloads (see the listing sketch after this list).
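In a multi-cluster setup, the admin cluster stores each user cluster as a Cluster resource in its own namespace. Assuming the resource names I recall are correct (and with a placeholder kubeconfig path), you can list them from the admin workstation roughly like this:

```
# List the user clusters managed by the admin cluster.
kubectl --kubeconfig bmctl-workspace/admin1/admin1-kubeconfig \
  get clusters.baremetal.cluster.gke.io --all-namespaces
```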
Hybrid cluster deployment
This deployment model is a special case of the multi-cluster deployment. In this model, the admin cluster also runs user workloads while still managing additional user clusters.
The following criteria are satisfied by this model:
It makes it possible for user workloads to reuse control plane nodes.
You accept the security tradeoff of running user workloads on your admin cluster, which houses sensitive data.
Manage clusters with the Anthos UI
Once you install Anthos clusters on bare metal, Connect uses a deployment called Connect Agent to establish a connection between your Anthos clusters and your Google Cloud project and to handle Kubernetes requests.
Connect lets you attach any Kubernetes cluster you have to Google Cloud. This makes it possible to interact with your cluster through a standard user interface, the console, and to use tools for workload management (a registration sketch follows).
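If a cluster was not registered automatically during creation, registering it to your project's fleet is usually done with gcloud. A hedged sketch, where the membership name, context, kubeconfig path, and service account key file are placeholders and flag availability may vary slightly between CLI versions:

```
# Register the cluster with the fleet so Connect Agent links it to your project.
gcloud container fleet memberships register cluster1-membership \
  --context=cluster1-admin@cluster1 \
  --kubeconfig=bmctl-workspace/cluster1/cluster1-kubeconfig \
  --service-account-key-file=bmctl-workspace/.sa-keys/connect-agent.json
```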
Managing clusters in the console
No matter where your Kubernetes clusters are running, the console provides a centralized user interface for managing all of their resources. One dashboard displays all of your resources, and it is simple to see your workloads across different Kubernetes clusters.
Authentication
To access a cluster through the Google Cloud dashboard, your registered clusters must be configured with one of the following authentication types:
Google identity: With this option, users can log in using their Google Cloud identity.
OpenID Connect (OIDC): You can use this to log in to the cluster from the console if your cluster is set up to use an OIDC identity provider.
Bearer token: If none of the Google-provided options above work for your organization, you can set up authentication by creating a Kubernetes service account and logging in with that account's bearer token (a hedged setup sketch follows this list).
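A minimal sketch of the bearer-token option, with a service account name chosen here purely for illustration (the role bindings your organization requires may differ):

```
# Create a service account for console login and grant it read access.
kubectl create serviceaccount console-reader -n kube-system
kubectl create clusterrolebinding console-reader-view \
  --clusterrole=view --serviceaccount=kube-system:console-reader

# Retrieve a bearer token for that service account (Kubernetes 1.24+ syntax);
# paste the printed token into the console's "Token" login option.
kubectl create token console-reader -n kube-system
```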
Authorization
The cluster's API server runs authorization checks against the identity you use to authenticate to the Google Cloud console.
At a minimum, every account that logs in to the cluster must hold the following Kubernetes RBAC roles: view and node-reader.
These roles provide read-only access and visibility into a cluster's nodes. Because they do not grant access to all resources (for example, Kubernetes Secrets or Pod logs), some Google Cloud console features might not be accessible (see the RBAC sketch below).
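As a sketch of granting those minimum roles, note that node-reader is not a built-in ClusterRole, so the definition below is an illustrative assumption, and the user email is a placeholder:

```
# Define a minimal node-reader ClusterRole (illustrative definition) and bind it,
# together with the built-in "view" ClusterRole, to the identity that logs in.
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
EOF

kubectl create clusterrolebinding console-view-binding \
  --clusterrole=view --user=you@example.com
kubectl create clusterrolebinding console-node-reader-binding \
  --clusterrole=node-reader --user=you@example.com
```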
Managing a company's infrastructure on a single public cloud is frequently insufficient to achieve long-term business goals. As a result, a different strategy is employed in which workloads are divided among multiple cloud vendors (such as Google, AWS, and Azure) and controlled from the same platform. This flexible approach to cloud design is called multi-cloud.
What does a hybrid cloud mean?
A hybrid cloud combines on-premises infrastructure, private cloud, and public cloud, and can use all three simultaneously to support a single application. Hybrid cloud is one of the deployment models included in multi-cloud.
How may data be moved to the cloud to lower IT costs?
When moving workloads to the cloud, the following two costs should be taken into account:
The total cost of ownership (TCO) and migration costs. A thorough understanding of TCO ensures that only the best-suited resources are moved to the cloud, while a detailed analysis of migration costs identifies the difficulty of the change and builds a business case for budget approval.
Conclusion
To conclude this blog, we discussed Anthos clusters on bare metal, their prerequisites, and why you might choose them. We then covered configuring hardware for Anthos clusters on bare metal, choosing among the various deployment models, managing clusters from the console, and authentication and authorization.