A service mesh is an architecture that lets you build robust enterprise applications composed of many microservices on the infrastructure of your choice. It provides managed, observable, and secure communication across your services. Because a service mesh handles the common concerns of running a service, such as monitoring, networking, and security, with standardized, powerful tools, service developers and operators can concentrate on building and operating great applications for their users.
Supported versions
Google supports the current version of Anthos Service Mesh and the two preceding (n-2) minor versions. The table below lists the supported versions of Anthos Service Mesh along with each version's earliest end-of-life (EOL) date.
If you are using an unsupported version, you must upgrade to Anthos Service Mesh v1.12 or later.
The table below lists the unsupported Anthos Service Mesh versions and their end-of-life (EOL) dates.
Security
Certificate distribution/rotation mechanisms
Certificate authority (CA) support
Anthos Service Mesh security features
Authorization policy
Authentication policy
Peer authentication
Request authentication
Observability overview
Anthos Service Mesh gives you visibility into the health and performance of your services. Anthos Service Mesh collects telemetry data with sidecar proxies, which are injected into the same pods as your workloads as a separate container. The proxies intercept all inbound and outbound HTTP traffic to the workloads and send the information to Anthos Service Mesh. This lets service developers collect telemetry data without instrumenting their code.
When you install Anthos Service Mesh, Cloud Monitoring and Cloud Logging are enabled in your Cloud project. Each sidecar proxy injected into your service pods calls the Cloud Monitoring API and the Cloud Logging API to report telemetry data. The telemetry data is automatically displayed on the Anthos Service Mesh pages in the console. Keep in mind that on those pages, metrics are shown only for HTTP services.
On the Anthos Service Mesh pages in the Google Cloud console, you can:
Get an overview of all the services in your mesh, with service-level metrics for three of the four golden monitoring signals: traffic, error rates, and latency.
Define, review, and set alerts on service level objectives (SLOs), which measure the user-visible performance of your services.
View metric charts for specific services and study them in depth using filters and breakdowns, such as by response code, protocol, destination pod, traffic source, and more.
Find out exactly what each service's endpoints are, how traffic moves between them, and how well each communication edge performs.
Explore a topology graph visualization that displays the services in your mesh and their connections.
Deploying Services
Deploying Services to clusters with Anthos Service Mesh is almost the same as deploying Services to clusters without Anthos Service Mesh. You do need to make some changes to your Kubernetes manifests:
Create Kubernetes Services for all containers. All deployments should have a Kubernetes Service attached.
Name your Service ports. Although GKE allows you to define unnamed Service ports, Anthos Service Mesh requires that you give each port a name that matches the port's protocol (for example, http or http-web).
Label your Deployments. This allows you to use Anthos Service Mesh traffic management features such as splitting traffic between versions of the same service.
The following example Deployment and Service illustrate these requirements.
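A minimal sketch of such a manifest, applied with kubectl (the names, labels, image, and port numbers are illustrative, not taken from the original guide):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
      version: v1
  template:
    metadata:
      labels:
        app: frontend     # matched by the Service selector
        version: v1       # version label enables traffic splitting between versions
    spec:
      containers:
      - name: frontend
        image: gcr.io/example/frontend:1.0   # illustrative image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - name: http            # named port; the name matches the protocol (http)
    port: 80
    targetPort: 8080
EOF

The version label on the Deployment is what later allows traffic-management rules to split traffic between v1 and a future v2 of the same service.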
Create a GKE cluster with Anthos Service Mesh and the gcloud CLI
Install required tools
You can run the tool on your local Linux machine or in Cloud Shell. Cloud Shell pre-installs all the necessary tools. Note that macOS is not supported because it ships with an old version of bash.
Download asmcli
Download the version that installs Anthos Service Mesh 1.14.1 to the current working directory:
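The download command itself was omitted here; it typically looks like the following (the exact URL segment depends on the release you want, so verify it against Google's documentation):

curl https://storage.googleapis.com/csm-artifacts/asm/asmcli_1.14 > asmcli

Output similar to the following indicates that the download succeeded: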
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k  100  167k    0     0   701k      0 --:--:-- --:--:-- --:--:--  701k
Make the script executable:
chmod +x asmcli
Install Anthos Service Mesh
To install Anthos Service Mesh on the cluster you created earlier, run the asmcli program with the following options. If you haven't left this page since creating the cluster, the placeholders contain the values you passed to the gcloud container clusters create command.
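A sketch of the command, assuming a single cluster and the Anthos Service Mesh certificate authority; the flag set mirrors Google's documented options, but verify it against the asmcli version you downloaded:

./asmcli install \
  --project_id PROJECT_ID \
  --cluster_name CLUSTER_NAME \
  --cluster_location CLUSTER_LOCATION \
  --fleet_id FLEET_PROJECT_ID \
  --output_dir ./asm-output \
  --enable_all \
  --ca mesh_ca

The --output_dir directory then contains istioctl and the sample gateway manifests used in the next section.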
Deploy an ingress gateway
Anthos Service Mesh lets you deploy and manage gateways as part of your service mesh. A gateway is a load balancer operating at the edge of the mesh that accepts or rejects incoming HTTP/TCP connections. Gateways are Envoy proxies that give you fine-grained control over traffic entering and leaving the mesh.
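A sketch of deploying a gateway into its own namespace, assuming you passed --output_dir to asmcli so the bundled sample gateway manifests are available locally (the namespace name and paths are illustrative):

kubectl create namespace ingress
# Enable injection for the namespace; use your installation's revision label instead
# if you are not using default injection labels.
kubectl label namespace ingress istio-injection=enabled --overwrite
# Apply the sample ingress gateway manifests downloaded to the asmcli output directory.
kubectl apply -n ingress -f ./asm-output/samples/gateways/istio-ingressgateway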
Deploy the Online Boutique sample
The Online Boutique sample application in the anthos-service-mesh-packages repository is modified from the original manifests in the microservices-demo repository. Following best practices, each service is deployed in its own namespace with a dedicated service account.
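A sketch of deploying the sample, assuming the Online Boutique package from anthos-service-mesh-packages has been fetched into a local online-boutique/ directory (the directory layout and the frontend namespace are illustrative):

# Create the per-service namespaces and service accounts.
kubectl apply -f online-boutique/kubernetes-manifests/namespaces
# Label each sample namespace so that new pods get a sidecar proxy injected
# (repeat for the other namespaces, or use your installation's revision label).
kubectl label namespace frontend istio-injection=enabled --overwrite
# Deploy the workloads and their Services.
kubectl apply -f online-boutique/kubernetes-manifests/deployments
kubectl apply -f online-boutique/kubernetes-manifests/services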
Exposing and accessing the application
The application can be exposed in a variety of ways; in this guide, we use the ingress gateway that we deployed earlier. For more information on how to expose the Online Boutique application, refer to the section on exposing and accessing the application in the Deploying the Online Boutique sample application guide.
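Once a Gateway and VirtualService route traffic to the frontend, you can look up the gateway's external IP address. A sketch, assuming the ingress gateway Service is named istio-ingressgateway in the ingress namespace used earlier:

# Print the external IP assigned to the ingress gateway's load balancer.
kubectl get service istio-ingressgateway -n ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'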
View the Service Mesh dashboards
After you deploy workloads on your cluster and inject sidecar proxies, you can explore the Anthos Service Mesh pages in the console to see all of the observability features that Anthos Service Mesh offers. Keep in mind that it takes roughly one to two minutes for telemetry data to appear in the console after you deploy workloads.
Install Anthos Service Mesh
The following steps outline how to install Anthos Service Mesh:
To install the in-cluster control plane on a single cluster, run asmcli install. See the following sections for command-line examples; they contain required and optional arguments that you might find helpful. We recommend that you always specify the --output_dir argument so that you can easily locate sample gateways and tools such as istioctl.
Private GKE clusters require an extra firewall configuration step to allow traffic to istiod (see the sketch after this list).
Optionally, install an ingress gateway. By default, asmcli doesn't install the istio-ingressgateway. We recommend that you deploy and manage the control plane and gateways separately.
To finish setting up Anthos Service Mesh, enable automatic sidecar injection and deploy or redeploy workloads.
If you are installing Anthos Service Mesh on more than one cluster, run asmcli install on each cluster and make sure to use the same FLEET_PROJECT_ID each time. After Anthos Service Mesh is installed, see Setting up a multi-cluster mesh.
If your clusters are on different networks (as they are in island mode), pass a unique network name to asmcli using the --network_id flag.
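For the private-cluster firewall step referenced above, a sketch using gcloud (the rule name pattern and port list should be verified against your cluster; port 15017 is used by istiod's sidecar injection webhook):

# Find the automatically created master firewall rule for the cluster.
gcloud compute firewall-rules list --filter="name~gke-CLUSTER_NAME-[0-9a-z]*-master"
# Add port 15017 so the GKE control plane can reach istiod's injection webhook.
gcloud compute firewall-rules update FIREWALL_RULE_NAME --allow tcp:10250,tcp:443,tcp:15017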
Configure managed Anthos Service Mesh
You can configure managed Anthos Service Mesh, which has a control plane that Google manages and an optional managed data plane. Google takes care of their reliability, upgrades, scaling, and security for you in a backward-compatible manner. This guide covers how to set up or migrate applications to managed Anthos Service Mesh using asmcli, in a single-cluster or multi-cluster configuration.
Download the installation tool
Download the latest version of the tool to the current working directory:
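The download command was omitted here; it typically looks like the following (as with the earlier install, the exact URL depends on the release track you want):

curl https://storage.googleapis.com/csm-artifacts/asm/asmcli > asmcli
chmod +x asmcli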
Use the following steps to configure managed Anthos Service Mesh for each cluster in your mesh.
Apply the Google-managed control plane
Run the installation tool for each cluster that will use managed Anthos Service Mesh. We recommend that you include both of the following options:
--enable-registration --fleet_id FLEET_PROJECT_ID: These two flags register the cluster to a fleet, where FLEET_PROJECT_ID is the project ID of the fleet host project. In a single-project setup, FLEET_PROJECT_ID is the same as PROJECT_ID, and the cluster project and the fleet host project are the same. For more complex configurations such as multi-project setups, we recommend using a separate fleet host project.
--enable-all: This flag enables both the required components and registration.
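A sketch of a full invocation for the managed control plane with both options included (placeholders as before; verify the flags against your asmcli version):

./asmcli install \
  --project_id PROJECT_ID \
  --cluster_name CLUSTER_NAME \
  --cluster_location CLUSTER_LOCATION \
  --managed \
  --enable-registration \
  --fleet_id FLEET_PROJECT_ID \
  --enable-all \
  --output_dir ./asm-output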
Verify the control plane has been provisioned
The asmcli tool creates a ControlPlaneRevision custom resource in the cluster. This resource's status is updated when the managed control plane is provisioned or fails provisioning.
Inspect the status of the resource. Replace NAME with the value corresponding to each channel: asm-managed, asm-managed-stable, or asm-managed-rapid.
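A sketch of the check, using kubectl against the istio-system namespace:

# The resource's status conditions indicate whether provisioning succeeded or failed.
kubectl describe controlplanerevision NAME -n istio-system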
Zero-touch upgrades
Once the Google-managed control plane is installed, Google upgrades it automatically as new patches and releases become available.
Configure endpoint discovery
Before moving on, managed Anthos Service Mesh should be configured on each cluster, as described in the previous step. There is no need to indicate that a cluster is a primary cluster; this is the default behavior. Before configuring endpoint discovery, you must complete the Setting the project and cluster variables and Create firewall rule sections.
Deploy applications
To deploy applications, label their namespaces with either the revision label corresponding to the channel you configured during installation, or istio-injection=enabled if you are using default injection labels.
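A sketch for one application namespace, assuming the regular channel's asm-managed revision label (substitute your channel's label, or istio-injection=enabled for default injection; the manifest path is illustrative):

# Point the namespace at the managed control plane's injection webhook.
kubectl label namespace NAMESPACE istio.io/rev=asm-managed --overwrite
# Deploy or redeploy the workloads so new pods are created with sidecar proxies.
kubectl apply -n NAMESPACE -f ./app-manifests/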
You can view the version of the control plane and data plane in Metrics Explorer.
To check that your configuration works correctly:
In the console, see the control plane metrics.
Choose your workspace and add a custom query using the following parameters:
Resource type: Kubernetes Container
Metric: Proxy Clients
Filter: container_name="cr-REVISION_LABEL"
Group By: revision label and proxy_version label
Aggregator: sum
Period: 1 minute
When you run Anthos Service Mesh with both an in-cluster control plane and a Google-managed control plane, you can separate the metrics by their container name. For example, unmanaged metrics have container_name="discovery", while managed metrics have container_name="cr-asm-managed".
To display metrics from both, remove the Filter on container_name="cr-asm-managed".
Check the control plane version and proxy version by inspecting the following fields in Metrics Explorer:
The revision field shows the control plane version.
The proxy_version field shows the proxy version.
The value field signifies the number of connected proxies.
Migrate applications to managed Anthos Service Mesh
To migrate to managed Anthos Service Mesh, perform the following steps (a sketch of the typical per-namespace commands follows this list):
Run the tool as directed in the Apply the Google-managed control plane section.
To ensure that the workloads are functioning correctly, test your application.
If you have workloads in additional namespaces, repeat the previous steps for each of them.
If you installed the application in a multi-cluster deployment, replicate the Kubernetes and Istio configuration in all clusters, unless there is a reason to restrict that configuration to a subset of clusters. The configuration applied to a given cluster is the source of truth for that cluster.
Follow the Verify control plane metrics procedure to check that the metrics are displayed as expected.
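As referenced before the list, the per-namespace part of the migration typically amounts to swapping the injection label and restarting workloads; a sketch, assuming the asm-managed revision label:

# Replace the in-cluster injection label with the managed control plane's revision label.
kubectl label namespace NAMESPACE istio-injection- istio.io/rev=asm-managed --overwrite
# Restart the Deployments so their pods reconnect to the managed control plane.
kubectl rollout restart deployment -n NAMESPACE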
Uninstall
The Google-managed control plane automatically scales to zero when no namespaces are using it.
Google Cloud Self-Paced Labs are interactive labs that take place online. These labs provide a series of instructions that guide you through a real-world, scenario-based use case in real time.
What is the use of Qwiklabs?
Qwiklabs offers temporary credentials to both Google Cloud Platform and Amazon Web Services, giving you the opportunity to work on several cloud platforms and gain hands-on experience.
What is container migration?
Live container migration is the process of moving an application between different physical machines or clouds without disconnecting the client.
What is Migrate for Anthos?
Migrate to Containers is used to convert VM-based workloads into containers that run in Google Kubernetes Engine (GKE) or Anthos.
What does a hybrid cloud mean?
A hybrid cloud combines on-premises infrastructure, private cloud, and public cloud, using all three simultaneously to support a single application. Hybrid cloud is one of the deployment models included in multi-cloud.
Conclusion
In this article, we have discussed Anthos Service Mesh in detail: its supported versions, security features, observability overview, deploying services, creating a GKE cluster, installing Anthos Service Mesh, and configuring managed Anthos Service Mesh.