Table of contents
1. Introduction
2. Installation
2.1. Scaling User Clusters
3. Using Anthos Clusters on AWS
3.1. Ingress
3.2. Create load balancer
4. Authentication with OIDC
5. Storage
6. Security
7. Frequently Asked Questions
7.1. How does ingress for Anthos work?
7.2. How to upgrade AWSCluster and AWSNodePools?
7.3. What is the difference between OAuth and OpenID Connect?
8. Conclusion
Last Updated: Mar 27, 2024

Anthos Clusters on AWS

Author: Yashesvinee V

Introduction

Anthos clusters on a public cloud allow users to manage GKE clusters on another provider's infrastructure through the Anthos Multi-Cloud API. Using Connect, GKE clusters on both Google Cloud and AWS/Azure can be managed from the Google Cloud console. When we create a cluster with Anthos clusters on AWS, it establishes the required AWS resources and brings up a cluster. Anthos clusters on AWS help provision and operate Kubernetes clusters on AWS infrastructure using a Google Cloud account. They use AWS APIs to provision the resources a cluster needs, including virtual machines, Auto Scaling groups, security groups, and load balancers.

Installation

Installing Anthos clusters on AWS requires an environment where one can install and run the necessary tools. Keep ready an AWS account with command-line access and two AWS Key Management Service (KMS) keys in the same region as the user clusters.

Download and install the AWS CLI, then configure the AWS IAM credentials and AWS region. Enabling the Anthos API for the current Google Cloud project is a must. Anthos clusters on AWS require Terraform version 0.14.3 or higher and kubectl version 1.17 or higher.

A management service is also required to create, update, and delete Anthos clusters on AWS (GKE on AWS) clusters. After installing the management service, create user clusters to run workloads. Anthos Config Management can optionally be installed as an add-on to apply a common configuration, including custom policies, across the entire infrastructure.

Scaling User Clusters

Workloads on Anthos clusters on AWS can be scaled automatically or manually by configuring the AWSNodePools. Anthos clusters on AWS implement the Kubernetes Cluster Autoscaler that automatically resizes the number of nodes in a given node pool based on the demands of the workloads. When demand on the nodes is high, Cluster Autoscaler adds nodes to the node pool. When demand is low, Cluster Autoscaler scales down to a specified minimum size. This can increase the availability of workloads and control costs. There is no need to add or remove nodes or over-provision the node pools manually. 

AWSNodePools have many field specifications. The minNodeCount and maxNodeCount fields declare the minimum and maximum number of worker nodes in the pool. Cluster Autoscaler can be disabled by setting spec.minNodeCount equal to spec.maxNodeCount.
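
For illustration, a node pool definition with autoscaling bounds might look like the sketch below. Only minNodeCount and maxNodeCount are described above; the API group and the other field values (cluster name, version, instance and subnet settings) are assumptions modelled on the GKE on AWS AWSNodePool resource and should be replaced with values from your own environment.

# Sketch of an AWSNodePool with Cluster Autoscaler bounds.
# Field values other than minNodeCount/maxNodeCount are placeholders.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSNodePool
metadata:
  name: my-node-pool
spec:
  clusterName: my-aws-cluster            # the AWSCluster this pool belongs to (placeholder)
  version: 1.25.5-gke.1500               # GKE on AWS version (placeholder)
  minNodeCount: 1                        # Cluster Autoscaler scales down to this many nodes
  maxNodeCount: 5                        # and never scales above this many
  instanceType: m5.large                 # EC2 instance type for worker nodes (placeholder)
  keyName: my-key-pair                   # EC2 key pair (placeholder)
  subnetID: subnet-0123456789abcdef0     # node subnet (placeholder)
  iamInstanceProfile: my-node-profile    # IAM instance profile (placeholder)
  maxPodsPerNode: 110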

Using Anthos Clusters on AWS

Users can enable Ingress and create load balancers for applications and networks on Anthos clusters on AWS.

Ingress

Anthos Service Mesh manages authentication and authorisation between services with little or no application changes. Users can control traffic flows and API calls between services with complete visibility.

Kubernetes Ingress is an API object that manages external access to the services in a cluster. Ingress for Anthos is a Google Cloud-hosted multi-cluster ingress controller for GKE clusters in Anthos. It can provide load balancing, SSL termination, and virtual hosting services. A Gateway offers more customisation and flexibility than Ingress, allowing features such as monitoring and route rules to be applied to the cluster's traffic. Anthos Service Mesh comes pre-installed with an Ingress Gateway.
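
For reference, a plain Kubernetes Ingress object is a minimal sketch like the one below; the service name and port are placeholders for a Service already running in the cluster.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress                # placeholder name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service      # an existing Service (placeholder)
            port:
              number: 80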

Create load balancer

A load balancer shares the load from user traffic across multiple instances of an application to reduce the risk of performance issues. We can set up an AWS Elastic Load Balancer (ELB) with Anthos clusters on AWS. Anthos clusters on AWS create an external (in a public subnet) or internal (in a private subnet) load balancer depending on the LoadBalancer resource. External load balancers are accessible by the IP addresses allowed in the node pool's security groups and the subnet's network ACLs. The Anthos clusters on AWS controller can create either a Classic Load Balancer or a Network Load Balancer on AWS.

After installing a management service and creating a user cluster, we can create a load balancer by creating a Deployment and exposing it with a Service.

Step 1: Save a YAML file for the deployment. The labels, image, replica count, and port filled in below are example values to replace with your own.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50001        # matches the file name used in Step 2
spec:
  selector:
    matchLabels:
      app: metrics                 # example labels; use your own
      department: sales
  replicas: 3                      # example replica count
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0   # example container image
        env:
        - name: "PORT"
          value: "50001"           # port the container listens on

Step 2: Create the deployment.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f my-deployment-50001.yaml

Step 3: Create a LoadBalancer Service for the deployment in another YAML file, specifying whether it should be a Classic or Network ELB on a public or private subnet, as in the sketch below.
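
A minimal sketch of such a Service is shown here, assuming the standard AWS load balancer annotations; the selector labels must match those used in the Deployment from Step 1, and the annotation values are examples to adjust for your subnet and ELB type.

# my-lb-service.yaml -- sketch only; adjust annotations and ports as needed.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    # Create a Network Load Balancer; omit this annotation for a Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Uncomment to place the load balancer in a private (internal) subnet.
    # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: metrics                   # must match the Deployment's labels
    department: sales
  ports:
  - port: 80                       # port exposed by the load balancer
    targetPort: 50001              # container port set via the PORT env variable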

Step 4: Create the service.

env HTTPS_PROXY=http://localhost:8118 \
  kubectl apply -f my-lb-service.yaml

Authentication with OIDC

OIDC refers to OpenID Connect, a simple identity layer on top of the OAuth 2.0 protocol. Clients can verify the identity of the end user based on the authentication performed by an authorisation server. Anthos clusters on AWS use OpenID Connect to authenticate users to user clusters when they interact with the Kubernetes API server. With OIDC, we can manage access to Kubernetes clusters by using the standard procedures for creating, enabling, or disabling user accounts. A typical OIDC login flow follows:

  • A user signs in to an OpenID provider using a username and password.
  • The OpenID provider responds and issues an ID token for the user.
  • The gcloud CLI sends an HTTPS request, including the user's ID token in the request header, to the Kubernetes API server.
  • The Kubernetes API server verifies the token using the provider's certificate.

Run the “gcloud anthos auth login” command to authenticate with clusters; it obtains OIDC ID tokens and stores them in the kubeconfig file, which the gcloud CLI and kubectl then use. The authentication configuration that the login command reads is generated with the “gcloud anthos create-login-config” command.

Three primary personas can be configured:

  • An Organisation administrator chooses an OpenID provider and registers client applications with the provider.
  • A Cluster administrator creates one or more user clusters and the authentication configuration files for the developers who use them.
  • A Developer runs workloads on one or more clusters and is authenticated using OIDC.

For Organisation administrators, the first step is to choose an OpenID provider from the list of certified providers. Next, create redirect URLs to return ID tokens; redirect URLs must be created for both the gcloud CLI and the console. Register the client applications with the OpenID provider and create a client ID and client secret for authentication. Finally, create a custom scope and claim to request and return the user's security groups.

For Cluster administrators, the user cluster's AWSCluster resource has to be configured for OIDC authentication (see the sketch below). After creating a user cluster, generate a login configuration file for the cluster; this file is distributed to users who need to authenticate to the user cluster. The final step is to configure gcloud to access the cluster, which requires the anthos-auth components in the CLI and the login-config file.
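
Purely as an illustration, the OIDC settings declared on the AWSCluster resource might look like the sketch below; the field names and their placement under the control plane spec are assumptions modelled on the usual OIDC parameters (issuer, client ID, redirect URI, user and group claims) and should be verified against the AWSCluster reference before use.

# Illustrative sketch only -- field names and placement are assumptions,
# not the authoritative AWSCluster schema.
apiVersion: multicloud.cluster.gke.io/v1
kind: AWSCluster
metadata:
  name: my-aws-cluster
spec:
  controlPlane:
    # ...existing control plane settings...
    oidc:
      issuerURI: https://accounts.example.com            # OpenID provider (placeholder)
      clientID: my-client-id                             # from registering the client application
      kubectlRedirectURI: http://localhost:1025/callback # redirect URL created for the CLI (placeholder)
      userClaim: email                                   # claim used as the username
      groupsClaim: groups                                # custom claim returning security groups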

Storage 

Users can create persistent storage for workloads running on Anthos clusters on AWS. Kubernetes PersistentVolume, PersistentVolumeClaim, and StorageClass resources provide persistent file and block storage to workloads. User clusters have a default Kubernetes StorageClass that dynamically provisions storage for workloads on AWS Elastic Block Store (EBS) volumes. They also have a default Kubernetes VolumeSnapshotClass that creates snapshots of stateful storage on EBS volumes. We can create a new StorageClass with a different storage driver to provision volume types that the default does not cover, and then either set that StorageClass as the cluster's default or configure workloads to use it. Anthos clusters on AWS support Elastic Block Store and Elastic File System storage for workloads.
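
For example, a custom StorageClass backed by the AWS EBS CSI driver could look like the minimal sketch below; the name, volume type, and default-class annotation are illustrative choices.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                                          # placeholder name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # optionally make this the cluster default
provisioner: ebs.csi.aws.com                             # AWS EBS CSI driver
parameters:
  type: gp3                                              # EBS volume type
volumeBindingMode: WaitForFirstConsumer                  # provision when a pod is scheduled

Workloads can then request this class by setting storageClassName on their PersistentVolumeClaims.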

Security

Anthos clusters on AWS offer various security features to help secure workloads, the contents of container images, the container runtime, the cluster network, and access to a cluster API server. Anthos clusters on AWS user clusters can be authenticated using anthos-gke, OIDC or a Kubernetes Service Account token.

For granular access at the cluster level, we can use Kubernetes role-based access control (RBAC). This allows users to create detailed policies defining which operations and resources users and service accounts can access. Access can be controlled for any validated identity that is provided. Cluster authentication in Anthos clusters on AWS is handled using certificates and service account bearer tokens.
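
As a sketch of such a policy, the Role and RoleBinding below grant read-only access to Pods in one namespace to a single OIDC-validated user; the names and the user identity are placeholders.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                 # placeholder role name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only operations
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: dev@example.com            # an OIDC-validated identity (placeholder)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Binding a role to a group claim returned by OIDC works the same way, with kind: Group in the subject.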

Users can encrypt sensitive data in user clusters using Kubernetes Secrets or HashiCorp Vault. Kubernetes Secrets store passwords, OAuth tokens, and SSH keys within the clusters. This is more secure than storing them in plaintext and reduces the risk of exposing the data to unauthorised users. HashiCorp Vault can be used to secure Secrets on the user clusters.
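
A minimal Kubernetes Secret is sketched below; the name, keys, and values are placeholders, and stringData is shown only for readability since the values are stored base64-encoded in the cluster.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials             # placeholder name
type: Opaque
stringData:                        # plaintext here; stored base64-encoded in etcd
  username: app-user
  password: replace-me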

Frequently Asked Questions

How does ingress for Anthos work?

Ingress for Anthos deploys shared load balancing resources across clusters and various regions, thus enabling users to use the same load balancer with an anycast IP for applications in a multi-cluster and multi-region topology. This allows users to gather multiple GKE clusters in different regions under one Load Balancer. It’s a controller for the external HTTP load balancer to provide ingress for traffic from the internet across one or more clusters.

How to upgrade AWSCluster and AWSNodePools?

An AWSCluster can be upgraded to a new version of Anthos clusters on AWS without updating its AWSNodePools. However, an AWSNodePool cannot be updated to a version higher than the AWSCluster, and AWSClusters have to be upgraded before AWSNodePools. The AWSNodePool version must not be more than two minor versions behind the AWSCluster version.

What is the difference between OAuth and OpenID Connect?

OAuth 2.0 is a framework that controls authorisation to a protected resource, while OpenID Connect is an industry standard for federated authentication. OpenID Connect is built on the OAuth 2.0 protocol and uses an additional ID token.

Conclusion

This blog discusses Anthos Clusters on AWS. It explains the various actions performed on Anthos clusters, including upgrading, storage, authentication and security. It also talks about creating load balancers and scaling user clusters.

Check out our articles on Amazon EKS, Kubernetes Interview Questions, OAuth authentication, and Cluster Computing. Explore our Library on Coding Ninjas Studio to gain knowledge on Data Structures and Algorithms, Machine Learning, Deep Learning, Cloud Computing and many more! Test your coding skills by solving our test series and participating in the contests hosted on Coding Ninjas Studio!

Looking for questions from tech giants like Amazon, Microsoft, Uber, etc.? Look at the problems, interview experiences, and interview bundle for placement preparations.

Upvote our blogs if you find them insightful and engaging! Happy Coding!
