Table of contents
1. Introduction
2. GKE and Anthos clusters
3. Migration steps
  3.1. Qualify source workloads
  3.2. Set up Migrate to Containers
  3.3. Migrate Linux workloads
  3.4. Migrate Windows workloads
4. Advantages of migrating to containers
  4.1. Migrate to containers versus lift-and-shift to Compute Engine
  4.2. Advantages of containers
5. Migration journey for GKE, Anthos, and Cloud Run
  5.1. Discovery phase
    5.1.1. Roles
  5.2. Migration planning phase
    5.2.1. Roles
  5.3. Landing zone setup phase
    5.3.1. Roles
  5.4. Migration and deployment phase
    5.4.1. Roles
  5.5. Operate and Optimize phase
6. Migrating to Containers management interfaces
  6.1. Google Cloud console
  6.2. Command-line interface
  6.3. CRD-based API
7. Migrating to Autopilot clusters and Cloud Run
  7.1. About Anthos clusters on AWS and workload identity
  7.2. Changes from the existing runtime
    7.2.1. New services-config.yaml artifact file added
    7.2.2. Readiness probes
    7.2.3. syslog support
  7.3. Limitations
  7.4. Updates for version 1.9.0
  7.5. Updates for version 1.8.1
8. Frequently Asked Questions
  8.1. What is container migration?
  8.2. What is Migrate for Anthos?
  8.3. Why are containers better than VMs?
9. Conclusion
Last Updated: Mar 27, 2024

Migrate to Containers

Author: Nagendra

Introduction

Migrate to Containers transforms VM-based workloads into containers that run on Google Kubernetes Engine (GKE), Anthos clusters, or Cloud Run. It lets you containerize existing workloads by migrating them from virtual machines (VMs) running on VMware, AWS, Azure, or Compute Engine.
This blog covers the migration to containers in detail: GKE and Anthos clusters, the benefits of migrating, the migration journey for GKE, Anthos, and Cloud Run, the Migrate to Containers management interfaces, and migrating to Autopilot clusters and Cloud Run.

 Without further ado, let's get started.

GKE and Anthos clusters

Google Kubernetes Engine (GKE) clusters offer multi-cluster support, autoscaling, and a secured, managed Kubernetes service. GKE, powered by Google Cloud, lets you launch, manage, and scale containerized applications.

  • Autopilot clusters: GKE maintains the nodes and node pools that make up the cluster's underlying infrastructure, giving you an efficient cluster with a hands-off experience. To learn more about the advantages of the streamlined Linux service manager, see the section on migrating to Autopilot clusters and Cloud Run below.
     

Anthos is an application management platform that offers a unified development and operations environment for on-premises and cloud settings. Anthos has several essential parts, including the following:

  • Anthos clusters: A container orchestration and management service for Kubernetes clusters, on-premises and in the cloud. To manage Kubernetes deployments in the environments where you plan to deploy your applications, Anthos relies on Anthos clusters on Google Cloud, Anthos clusters on VMware, or Anthos clusters on AWS.

  • Anthos Config Management: Defines, automates, and enforces policies across environments to meet your organization's security and compliance needs.

  • Anthos Service Mesh: Monitors, diagnoses, and enhances application performance while managing and securing traffic between services.

  • Anthos security: Secures your hybrid and multi-cloud installations by supplying uniform controls across your environments.
     

Let's look at the details of migration steps.

Migration steps

The Migrate to Containers tool lets you move and modernize your existing workloads as containers on a secure, managed Kubernetes cluster.
The sections that follow describe the procedure for moving VMs to containers. Each subsection builds on the one before it, so read them in order.

Qualify source workloads

Select the VMware, AWS, Azure, or Compute Engine Linux and Windows virtual machines (VMs) that you want to run as containers on GKE or Anthos:

  • Review planning best practices. Read tips on application migration drawn from real application migrations.

  • Check which VM operating systems are compatible.

  • Migrate to Containers provides a fit assessment tool that you can run against a VM workload to determine whether it is suitable for containerization (a sketch of this workflow follows).
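
The fit assessment typically proceeds as in the sketch below. The tool (mfit) and its collection script come from the Migrate to Containers documentation, but flags change between releases, so verify against mfit --help before relying on them:

Code:

# 1. Run the collection script on the source Linux VM to gather facts about
#    the workload (produces an archive; the file name here is illustrative).
sudo ./mfit-linux-collect.sh

# 2. Import the collected archive into mfit's local inventory.
./mfit discover import m4a-collect-myvm.tar

# 3. Assess the imported workloads for container fit.
./mfit assess

# 4. Produce a human-readable fit report.
./mfit report --format html > fit-report.html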

Set up Migrate to Containers

To perform the modifications needed to move a workload from a source VM to a target container, create a processing cluster to run the Migrate to Containers components:

  • When Google Cloud is the target, install Migrate to VMs for VMware, AWS, and Azure sources to ease the migration of workloads into Google Cloud.

  • To run an application in a container on-premises, install Anthos clusters on VMware in the vCenter/vSphere environment of the source VMware VM.

  • To run an application in a container on AWS, install Anthos clusters on AWS in the region of the source AWS VM.

  • For Windows virtual machines, only migration from Compute Engine VMs to Anthos or GKE containers running on Google Cloud is supported. You must therefore use Migrate to VMs to first migrate Windows VMs from other sources to Compute Engine VMs. A setup sketch follows this list.
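
As a concrete example, creating a GKE processing cluster on Google Cloud and installing the Migrate to Containers components might look like this. The cluster name, zone, and sizing are hypothetical; migctl setup install is the documented installer command:

Code:

# Hypothetical name, zone, and sizing; match the documented
# processing-cluster requirements for your Migrate to Containers version.
gcloud container clusters create migration-processing \
    --zone us-central1-a \
    --machine-type e2-standard-4 \
    --num-nodes 1

# Point kubectl at the new cluster.
gcloud container clusters get-credentials migration-processing \
    --zone us-central1-a

# Install the Migrate to Containers components on the processing cluster.
migctl setup install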

Migrate Linux workloads

Convert your Linux workloads to containers, then deploy them to an Anthos cluster on AWS (version 1.4 or later), an Anthos cluster on VMware, or a GKE cluster on Google Cloud.

Migrate Windows workloads

Convert your Windows workloads to containers, then deploy them to an Anthos cluster on AWS (version 1.4 or later), an Anthos cluster on VMware, or a GKE cluster on Google Cloud.

Let's look at the details of its advantages.

Advantages of migrating to containers 

Migrate to Containers containerizes existing VM-based apps to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, Anthos, or Cloud Run. By building on the GKE and Anthos ecosystems, it offers a quick and easy path to modern orchestration and application management without access to source code, rewriting, or re-architecting apps.

Migrate to containers versus lift-and-shift to Compute Engine

With the Migrate to VMs tool, you can migrate VM workloads into Compute Engine VM instances. This "lift and shift" strategy is the most straightforward form of cloud migration: it keeps the same operational model used to run and manage apps on-premises and upgrades only the underlying infrastructure.
Although lift and shift works well for some workloads, many customers moving to the cloud want to go a step further by adopting cloud-native tools, techniques, and managed services. They want to manage their workloads on GKE or Anthos, and they want to switch from virtual machines to containers.

Advantages of containers

Migrate to Containers lets you modernize application workloads by moving them to containers. Major advantages of containerizing workloads include:

  • Density: Because containers don't include a full operating system, they are substantially lighter than virtual machines (VMs) and need much less memory and processing power. With containers, you can pack workloads more densely onto your clusters, allocate resources more precisely, and pay less for infrastructure overall.

  • Node kernel with security optimizations: GKE and Anthos apply operating system upgrades automatically, relieving you of OS maintenance.

  • Augment legacy apps with modern services: You can use platform add-on services from GKE and Anthos to seamlessly integrate new features with existing apps. For example, you can use Anthos Service Mesh or Istio on GKE to automate network and security settings without modifying your application's code. You can also adopt cloud-based logging and monitoring by changing parameters rather than your applications.

  • Integrated resource management and unified policy: With GKE and Anthos, you can concentrate on managing apps rather than infrastructure. They provide strong tagging schemes and selector policies along with the power of declarative desired-state management.

  • Modern orchestration and image management: Migrate to Containers lets you modernize your application life cycle and operations management, including integrating with a CI/CD pipeline that uses tools like Cloud Build for day-2 maintenance procedures. Image-based administration also enables self-healing, rolling upgrades, and dynamic scaling.
     

Let's look at the details of the migration journey for GKE, Anthos, and Cloud Run.

Migration journey for GKE, Anthos, and Cloud Run

You must proceed through the following stages at a high level:

  • Discovery phase: You figure out which workloads you have, how they are related, and whether you can move them to containers.

  • Migration planning phase: You divide your fleet into groups of workloads that are connected and should move together (based on the assessment results), then choose the sequence in which to migrate the groups.

  • Landing zone setup phase: You establish the deployment environment for the migrated containers.

  • Migration and deployment phase: You containerize your VM workloads, then deploy and test the containers.

  • Operate and optimize phase: You use the tools of Anthos and the greater Kubernetes ecosystem to operate and optimize your containerized workloads.
     

Let's dive into the details of each of them.

Discovery phase

During the discovery phase, you gather the data required for migration by understanding your applications and their dependencies.

This data contains an inventory of:

  • The virtual machines (VMs) whose workload you want to move.
     
  • The network connections and ports your apps need.
     
  • Dependencies between app tiers.
     
  • DNS setup or service name resolution.
     

As you migrate to containers, be sure to consider how well the source workload and operating system fit containerization.
Migrate to Containers provides a fit assessment tool that you can run against a VM workload to determine whether it is suitable for containerization (see the sketch in the Qualify source workloads section).

Roles

  • IT analyst familiar with the application's topology and with the migration.

Migration planning phase

During the migration planning phase, you organize your applications into batches and convert the data gathered during discovery into the Kubernetes model.
Your Kubernetes YAML configuration files, each containing Kubernetes Custom Resource Definitions (CRDs), serve as a central repository for your application environments, topologies, namespaces, and rules.

Roles

  • Analyst or application migration engineer: This person should have a basic understanding of YAML files, GKE deployments, and the Kubernetes managed object paradigm.

Landing zone setup phase

You set up the deployment environment for the migrated containers during the landing zone setup phase.

This stage includes the following steps:

  • Create a GKE or Anthos cluster, or choose an existing one, to host your migrated workloads. This deployment cluster may be an Anthos cluster running on VMware, an Anthos cluster running on Google Cloud, or an Anthos cluster running on AWS at version 1.4 or later.

  • Create Kubernetes network policies and VPC network rules for your applications.

  • Define Kubernetes Service definitions for your workloads (a minimal example follows this list).

  • Choose a load-balancing strategy.

  • Set up DNS.
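
For the Service-definition step, a minimal Kubernetes Service for a migrated workload might look like the following; the name, selector label, and ports are hypothetical:

Code:

# Minimal Service for a migrated workload; names, labels, and ports
# are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
spec:
  selector:
    app: legacy-app      # must match the labels on the migrated Deployment
  ports:
  - port: 80             # port clients connect to
    targetPort: 8080     # port the containerized app listens on
  type: LoadBalancer     # or ClusterIP/NodePort, per your load-balancing strategy

The type field is where your load-balancing decision shows up: LoadBalancer provisions an external load balancer, while ClusterIP keeps the Service internal to the cluster.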

Roles

  • Cluster administrator with experience in cluster deployment, Google Cloud networking, firewall configurations, Identity and Access Management service accounts, and Google Cloud Marketplace installations.

Migration and deployment phase

The following steps are included in the migration and deployment workflow:

  • Set up the processing cluster: Set up a GKE or Anthos processing cluster to run the Migrate to Containers components that perform the conversion from a source VM to the target container.

  • Add a migration source: Add the source platform you are migrating from as a migration source.

  • Create a migration plan: Create the migration plan, then evaluate and adjust it before carrying it out.

  • Review and customize the migration plan: Review and revise the migration plan with input from key stakeholders, such as the application owner, the security administrator, and the storage administrator.

  • Generate artifacts: Process the source VM, using the migration plan as input, to create the relevant container artifacts:
    • Each of the supplied migration flows has its own specifics for the generate-artifacts step, which produces artifacts that can be used to deploy the migrated workload.

  • Integrate or deploy with CI/CD: With the container artifacts ready, you can deploy to a test, staging, or production cluster. Alternatively, you can use an orchestration platform like Cloud Build to feed the artifacts into a build-and-deploy pipeline.

  • Test: Verify that the extracted container image and data volume work correctly when run inside a container. You might run a sanity test on the processing cluster, spot any problems or changes needed in the migration plan, repeat the migration, and test again. A migctl sketch of this flow follows this list.
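
The create/review/generate loop above roughly maps to commands like the following. The source and migration names are hypothetical, and exact flags vary across Migrate to Containers versions, so check migctl --help for yours:

Code:

# Register the source environment (here Compute Engine; names are hypothetical).
migctl source create ce my-ce-source --project my-project --zone us-central1-a

# Create a migration plan for a specific source VM.
migctl migration create my-migration \
    --source my-ce-source \
    --vm-id my-vm \
    --type linux-system-container

# Download the generated plan for review.
migctl migration get my-migration

# ...edit my-migration.yaml with stakeholder input, then upload the changes...
migctl migration update my-migration --main-config my-migration.yaml

# Generate the container artifacts and watch progress.
migctl migration generate-artifacts my-migration
migctl migration status my-migration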

Roles

  • For workload migration:
    • Application owner or analyst for workload migration with basic familiarity with GKE deployments, the Kubernetes managed object paradigm and YAML editing.
       
  • OPTIONAL: To migrate data storage to a persistent volume other than a persistent disk in Google Cloud:
    • GKE administrator or storage administrator who is knowledgeable about Kubernetes persistent volume management.

Operate and Optimize phase

Utilize the capabilities offered by Anthos and the greater Kubernetes ecosystem during the operate and optimize phase.
In this phase, instead of rebuilding your applications, you can add access controls, encryption, and authentication using Istio, as well as monitoring and logging using Cloud Logging and Cloud Monitoring. Using tools like Cloud Build, you can connect to a CI/CD pipeline to carry out day-2 maintenance tasks like software package and version changes; a minimal pipeline sketch follows.
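
As a sketch of that CI/CD integration, the Cloud Build config below rebuilds a migrated image and rolls it out to a cluster; the image path, Deployment name, and cluster details are hypothetical:

Code:

# cloudbuild.yaml -- rebuild and push the migrated workload image,
# then roll it out to the cluster. All names are illustrative.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/legacy-app:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/legacy-app',
         'legacy-app=gcr.io/$PROJECT_ID/legacy-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
- 'gcr.io/$PROJECT_ID/legacy-app:$SHORT_SHA'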

Let's dive into the details of Migrating to Containers management interfaces.

Migrating to Containers management interfaces

Migrate to Containers provides three fundamental ways of interacting with the services and resources required to carry out migrations:

  • Google Cloud console
     
  • Command-line interface
     
  • CRD-based API
     

Let's dive into the details of each of them.

Google Cloud console

The Google Cloud console is a web-based, graphical user interface for managing your Google Cloud Platform (GCP) projects and resources. When using the console, you select an existing project or create a new one, then use the resources you create within that project.
You can create multiple projects and use them to divide your work however you see fit. For instance, you might start a new project to ensure that only specific team members can access its resources, while all team members can still access resources in another project.

You can perform the following steps from Google Cloud Console: 

  • Create a GKE cluster on-premises or in Google Cloud.

  • Add a migration source.

  • Create a migration.

  • Execute a migration.

  • View migration logs and track a migration's progress.
     

To access Migrate to Containers in the Google Cloud console:

  • Open the console.

  • Navigate to the Migrate to containers page in either of two ways:
    • In the left navigation menu, select Kubernetes Engine > Migrate to containers.

    • Or choose Anthos > Migrate to containers from the left navigation menu.
       

Command-line interface

You can also work in a terminal window:

  • Google Cloud provides the Google Cloud CLI. Use gcloud to manage both your development workflow and your GCP resources.

  • Use the Migrate to Containers command-line tool, migctl, to create a migration plan. Then, with feedback from key stakeholders such as the application owner, the security administrator, and the storage administrator, evaluate and amend the plan.
     

In addition, GCP offers Cloud Shell, an interactive, browser-based shell environment.

Cloud Shell gives you:

  • A temporary Compute Engine virtual machine instance.

  • Command-line access to the instance from a web browser.

  • An integrated code editor.

  • 5 GB of persistent disk storage.

  • Pre-installed tools, including the Google Cloud CLI.

  • Language support for Java, Go, Python, Node.js, PHP, Ruby, and .NET.

  • Web preview capabilities.

  • Pre-authorized access to GCP Console projects and resources.
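
For example, once Cloud Shell opens, you can confirm the pre-authorized account and active project before running any migration commands:

Code:

# Cloud Shell is pre-authenticated; verify the active account and project.
gcloud auth list
gcloud config list project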

CRD-based API

Migrate to Containers includes Custom Resource Definitions (CRDs) that make it simple to create and manage migrations from an API automation tool or your own code. For instance, you can build your own automated tooling on these CRDs.
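
As a rough illustration of driving a migration through this API, the manifest below sketches a Migration resource. The apiVersion and field names are assumptions based on the documented CRD group and will differ by release, so treat it as a shape, not a schema:

Code:

# Illustrative only: the apiVersion and spec fields are assumptions;
# consult the CRD reference for your installed version.
apiVersion: anthos-migrate.cloud.google.com/v1beta2
kind: Migration
metadata:
  name: my-migration
spec:
  osType: Linux           # hypothetical field: workload OS type
  source: my-ce-source    # hypothetical field: a registered migration source
  vmID: my-vm             # hypothetical field: ID of the source VM

You would then manage it like any other Kubernetes object, for example with kubectl apply -f my-migration.yaml.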

Let's look at the details of migrating to Autopilot clusters and Cloud Run.

Migrating to Autopilot clusters and Cloud Run

The original Linux service manager for Migrate to Containers was based on SysV init and systemd. The simplified Linux service manager replaces it with a more straightforward, container-friendly alternative.

Your migrated container workloads can be deployed to the following locations with the help of this streamlined Linux service manager:

  • Autopilot GKE clusters
     
  • Cloud Run
     

The simplified Linux service manager also resolves compatibility problems with Kubernetes plugins. For instance, it removes the need to build privileged containers, to specify a hostPath for /sys/fs/cgroup in the deployment_spec.yaml file, or to make configuration adjustments when deploying containers to Anthos clusters on AWS that use workload identity.
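
Once artifacts built with the simplified service manager are pushed to a registry, deploying the resulting image to Cloud Run can be as direct as the following; the service name, image path, and region are hypothetical:

Code:

# Deploy the migrated container image to Cloud Run (illustrative names).
gcloud run deploy legacy-app \
    --image gcr.io/my-project/legacy-app:v1 \
    --region us-central1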

About Anthos clusters on AWS and workload identity

The workload identity feature for Anthos clusters on AWS lets you link Kubernetes service accounts to AWS IAM accounts with particular permissions. Workload identity uses AWS IAM permissions to prevent unauthorized access to cloud resources.
You can deploy your migrated workloads to Anthos clusters on AWS that use workload identity with the existing runtime. However, to configure your deployment environment, you must take additional steps to set the following environment variables for your particular init system:

  • AWS_ROLE_ARN: The IAM role's Amazon Resource Name (ARN).

  • AWS_WEB_IDENTITY_TOKEN_FILE: The path of the file where the token is stored.
     

Thanks to the streamlined Linux service manager, you can deploy your containers without this additional configuration.
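
For reference, under the existing runtime those variables would be wired into the workload's container spec along these lines; the values are placeholders:

Code:

# Fragment of a container spec under the existing runtime; values are placeholders.
env:
- name: AWS_ROLE_ARN
  value: "arn:aws:iam::123456789012:role/my-workload-role"   # placeholder ARN
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: "/var/run/secrets/aws/token"                        # placeholder token path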

Changes from the existing runtime

To use the simplified Linux service manager, you need to be aware of the following differences and restrictions relative to the existing runtime.

New services-config.yaml artifact file added

If you enable the simplified Linux service manager, Migrate to Containers creates a new file called services-config.yaml when you build the migration artifacts. Use this file to control application initialization in a deployed container.

Readiness probes

When using the existing runtime, Migrate to Containers adds a readiness probe to the deployment_spec.yaml file. When the streamlined Linux service manager is enabled, no readiness probe is added.

If you want to add a readiness probe, we advise using an HTTP probe (an example follows the code below). Alternatively, you can invoke the service manager's status command directly:

Code:

readinessProbe:
  exec:
    command:           # probe the simplified service manager's status
    - /.m4a/gamma
    - status
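
A minimal HTTP readiness probe, against a hypothetical health endpoint and port, would instead look like:

Code:

readinessProbe:
  httpGet:
    path: /healthz   # hypothetical health endpoint exposed by the app
    port: 8080       # hypothetical application port
  initialDelaySeconds: 10
  periodSeconds: 5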

syslog support

To support syslog, the simplified Linux service manager creates a Unix socket at /dev/log and forwards log messages to stdout so that Kubernetes can store them as container logs.

Limitations

Be aware of the following restrictions of the streamlined Linux service manager, particularly if you use systemd as your init system:

  • The simple, exec, and notify systemd service types are all treated as exec services. In other words, the service is regarded as launched as soon as exec succeeds.

  • Notification sockets support only READY=1 messages for sd_notify().

  • If you need additional readiness checks, you can add them yourself, for example an HTTP check or another form of check.

  • Socket-type unit files are not supported: neither environment variables nor sockets are created.

Updates for version 1.9.0

The following improvements have been made to the streamlined Linux service manager:

  • The Linux service manager is no longer under Public Preview; it is now available to all users.
     
  • Existing container workload conversion to enable Autopilot now follows a different process. For an existing migration, you must now convert it by editing both the deployment-spec.yaml and the Dockerfile.
     
  • The config.yaml file has been renamed services-config.yaml.

Updates for version 1.8.1

Migrate to Containers version 1.8 included a Public Preview of the simplified Linux service manager. Version 1.8.1 includes the following updates:

  • To enable the streamlined Linux service manager, you no longer need to set an annotation in the migration plan. Instead, set v2kServiceManager in the migration plan.
     
  • The environment variable HC_V2K_SERVICE_MANAGER has replaced HC_GAMMA_RUNTIME.
     
  • The services-config.yaml file now has prestart and poststart entries that are automatically filled in. To learn more, see Using services-config.yaml.
     
  • You can now set environment variables at the global or application level using the services-config.yaml file. To learn more, see Using services-config.yaml.
     
  • Support for customizing the log data written to Cloud Logging has been added.


Frequently Asked Questions

What is container migration?

The process of moving an application between different physical machines or clouds without disconnecting the client is known as live container migration.

What is Migrate for Anthos?

Migrate to Containers (formerly Migrate for Anthos) converts VM-based workloads into containers that run on Google Kubernetes Engine (GKE) or Anthos.

Why are containers better than VMs?

Container images are measured in megabytes rather than gigabytes, so containers are far lighter than virtual machines. They also consume fewer IT resources to deploy, run, and manage.

Conclusion

In this article, we have extensively discussed Migrate to Containers, along with the details of GKE and Anthos clusters, the advantages of migrating to containers, the migration journey for GKE, Anthos, and Cloud Run, the Migrate to Containers management interfaces, and migrating to Autopilot clusters and Cloud Run.

We hope that this blog has helped you enhance your knowledge regarding Migrate to Containers, and if you would like to learn more, check out our articles on Google Cloud Certification. You can refer to our guided paths on the Coding Ninjas Studio platform to learn more about DSA, DBMS, Competitive Programming, Python, Java, JavaScript, etc. To practice and improve yourself for the interview, you can also check out Top 100 SQL problems, Interview experience, Coding interview questions, and the Ultimate guide path for interviews. Do upvote our blog to help other ninjas grow. Happy Coding!!
