Vertex AI unifies AutoML and AI Platform into a single set of APIs, client libraries, and user interfaces. AutoML lets you train models on image, tabular, text, and video datasets without writing any code, while custom training (the successor to AI Platform Training) lets you run your own training code. Both options are available in Vertex AI, and whichever training method you choose, you can save models, deploy them, and request predictions with Vertex AI.
For more information about Vertex AI, let's dive into the article.
Position of Vertex AI in the ML workflow
Vertex AI can be used to manage the following stages of the ML workflow (a brief sketch with the Python SDK follows the list):
Create a dataset and upload data to it.
Train an ML model on your data:
Train the model.
Evaluate the model's accuracy.
Tune hyperparameters (custom training only).
Upload and store your model in Vertex AI.
Deploy your trained model to an endpoint to serve predictions.
Send prediction requests to your endpoint.
Specify a prediction traffic split on your endpoint.
Manage your models and endpoints.
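To make these stages concrete, here is a minimal sketch using the google-cloud-aiplatform Python SDK with an AutoML tabular model. The project ID, bucket path, column name, and budget are placeholders, not values from this article.

# Minimal sketch of the workflow with the Vertex AI Python SDK
# (google-cloud-aiplatform). All names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# 1. Create a dataset and upload data to it.
dataset = aiplatform.TabularDataset.create(
    display_name="my-tabular-dataset",
    gcs_source=["gs://my-bucket/data.csv"],
)

# 2. Train an AutoML model on the data.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="my-automl-job",
    optimization_prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    target_column="label",          # placeholder target column
    budget_milli_node_hours=1000,   # one node hour
)

# 3. Deploy the trained model to an endpoint and request a prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[{"feature_a": "1.0", "feature_b": "x"}]))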
Components of Vertex AI
The components of Vertex AI are discussed in this section, along with the primary function of each component.
Model training
You can train models on Vertex AI with AutoML, or use custom training if you need the broader customization options that AI Platform Training offered. With custom training you can choose from a range of machine types, enable distributed training, use hyperparameter tuning, and accelerate training with GPUs.
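As a rough illustration, a custom training job can be submitted with the google-cloud-aiplatform Python SDK. The project, staging bucket, script path, and pre-built container tag below are placeholders.

# Minimal sketch of submitting a custom training job with the Vertex AI
# Python SDK. Names and the pre-built container tag are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

job = aiplatform.CustomTrainingJob(
    display_name="my-custom-training-job",
    script_path="trainer/task.py",   # your local training script
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
)

# Machine type and replica count are chosen here; additional replicas
# enable distributed training, and accelerators can also be requested.
job.run(
    machine_type="n1-standard-4",
    replica_count=1,
)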
Model deployment for prediction
Whether or not a model was trained on Vertex AI, you can deploy it to Vertex AI and get an endpoint for serving predictions.
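For example, with the Python SDK a model that already exists in Vertex AI (however it was trained) might be deployed and queried roughly like this; the model resource name and instance payload are placeholders.

# Minimal sketch of deploying an existing Vertex AI model and requesting
# an online prediction. The model ID and instance format are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

# Deploying creates an endpoint that serves online predictions.
endpoint = model.deploy(machine_type="n1-standard-4")

# The required instance format depends on the model's input schema.
response = endpoint.predict(instances=[{"feature_a": 1.0}])
print(response.predictions)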
Vertex AI Data Labeling
With data labeling jobs, you can request human labeling for a dataset that you plan to use to train a custom machine learning model. You can request labeling for text, image, or video data. To submit a labeling request, you provide a representative sample of labeled data, a list of all the labels for your dataset, and instructions for how to apply them. The human labelers follow your instructions, and when the labeling request is complete, you receive an annotated dataset that you can use to train a machine learning model.
Vertex AI Feature Store
Vertex AI Feature Store is a fully managed repository for ingesting, serving, and sharing ML feature values across your organization. Vertex AI Feature Store manages all the underlying infrastructure: for example, it provides storage and compute resources and scales easily.
Vertex AI Workbench
Vertex AI Workbench is a Jupyter notebook-based development environment that supports the entire data science workflow. Without leaving the JupyterLab interface, you can access your data, process it in a Dataproc cluster, train a model, share your results, and more.
Tools to interact with Vertex AI
The methods for interacting with Vertex AI are described in this section.
The Google Cloud console
In the console, you can deploy models to the cloud and manage your datasets, models, endpoints, and jobs. This option gives you a user interface for working with your machine learning resources. Because your Vertex AI resources are part of Google Cloud, you can use tools such as Cloud Logging and Cloud Monitoring with them. The Dashboard page of the Vertex AI section is the best place to start using the console:
Go to the Dashboard
Cloud Client Libraries
Vertex AI provides client libraries for several languages so that you can call the Vertex AI API. The client libraries follow each supported language's natural conventions and style to offer an enhanced developer experience. Alternatively, you can use the Google API Client Libraries to access the Vertex AI API from Dart or another language. The Google API Client Libraries provide representations of the API's resources and objects, which is simpler and requires less code than working with HTTP requests directly.
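For instance, a short, illustrative use of the Python client library (google-cloud-aiplatform) looks like this; the project ID and region are placeholders.

# Minimal sketch of using the Vertex AI Python client library.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# List the models stored in this project and region.
for model in aiplatform.Model.list():
    print(model.display_name, model.resource_name)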
REST API
The Vertex AI REST API provides RESTful services for managing jobs, models, and endpoints, and for making predictions with hosted models on Google Cloud.
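As a sketch of calling the REST API directly (assuming Application Default Credentials are configured), the models.list method can be reached with an authorized HTTP session; the project ID and region are placeholders.

# Minimal sketch of calling the Vertex AI REST API (models.list).
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

project = "my-project"   # placeholder
region = "us-central1"
url = (
    f"https://{region}-aiplatform.googleapis.com/v1/"
    f"projects/{project}/locations/{region}/models"
)

response = session.get(url)
response.raise_for_status()
print(response.json())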
Deep Learning VM Images
Deep Learning VM Images is a set of virtual machine images optimized for data science and machine learning tasks. Key ML frameworks and tools come pre-installed on every image, and you can use the images out of the box on instances with GPUs to accelerate your data processing tasks. Deep Learning VM Images supports many combinations of framework and processor. Images are currently available for TensorFlow Enterprise, TensorFlow, PyTorch, and generic high-performance computing, in variants for both CPU-only and GPU-enabled workflows.
Deep Learning Containers
Deep Learning Containers is a set of Docker containers with key data science frameworks, libraries, and tools pre-installed. These containers provide consistent, performance-optimized environments that help you prototype and deploy workflows quickly.
Vertex AI for AI Platform users
Vertex AI combines AutoML and the AI Platform into a single interface. For users who are already familiar with AI Platform, this section contrasts Vertex AI and AI Platform.
Custom training
Vertex AI offers both AutoML model training and custom training, which is closer to AI Platform Training.
Prediction
Vertex Explainable AI
Both Vertex Explainable AI and AI Explanations for AI Platform provide feature attributions for tabular and image models.
Data labeling
The AI Platform Data Labeling Service is still available, with some changes to the API.
In addition, Vertex AI offers a new data labeling feature:
Data labeling jobs are normally completed by Google's specialist labelers. Instead of using Google's specialists, you can create a specialist pool that lets you manage data labeling tasks with your own workforce. This feature is currently available only through an API request.
Vertex AI for AutoML users
This section compares the AutoML products with Vertex AI to help AutoML users understand how to use Vertex AI. Look at the tables relevant to your use case and consider any changes that could affect your workflow.
General usage
All Vertex AI users should be aware of these differences.
AutoML Natural Language users
AutoML Natural Language corresponds to the text data type in Vertex AI.
AutoML Vision and AutoML Video Intelligence users
AutoML Vision and AutoML Video Intelligence correspond to the image and video data types in Vertex AI.
AutoML Tables users
AutoML Tables corresponds to the tabular data type in Vertex AI.
API users
Migrate to Vertex AI
This section offers suggested steps and other details to help you plan and carry out a migration from the AutoML products and AI Platform to Vertex AI. Vertex AI supports all the models and functionality that AutoML and AI Platform offer. However, the Vertex AI client libraries are not backward compatible with existing client integrations, so you must plan to migrate your resources to take advantage of Vertex AI features.
If you are starting a new project, create your code, jobs, datasets, and models in Vertex AI from the beginning. That way you benefit from new features and service improvements as soon as they become available. AI Platform and AutoML remain available, but future updates go to Vertex AI.
Recommended steps for migrating to Vertex AI
Use the following recommended steps to migrate your existing models, datasets, jobs, and code from AutoML and AI Platform to Vertex AI.
Migrating from AutoML
Follow these steps to switch your implementation from AutoML to Vertex AI:
Read Vertex AI for AutoML users to learn about the key differences between AutoML and Vertex AI.
Review any potential pricing changes (see Vertex AI migration pricing).
Take an inventory of the Google Cloud projects, code, jobs, datasets, models, and users that have access to AutoML. Use this information to decide which resources to migrate and to make sure the right users can access the migrated resources.
Review the IAM role updates before you update service accounts and resource authentication.
Review the information about the migration process and the list of resources that cannot be migrated.
Migrate your resources using one of these two methods:
Use the migration tool.
Use the Vertex AI client libraries and methods.
Learn how Vertex AI uses regional endpoints.
Identify which of your applications use AutoML APIs to decide which method calls you want to migrate.
Update your applications and workflows to use the Vertex AI API and Vertex AI features.
Plan how you will monitor your request quota.
Migrating from AI Platform
Follow these steps to switch your implementation from AI Platform to Vertex AI:
Read Vertex AI for AI Platform users to learn about the key differences between AI Platform and Vertex AI.
Review any potential pricing changes (see Vertex AI migration pricing).
Take an inventory of the Google Cloud projects, code, jobs, datasets, models, and users that have access to AI Platform. Use this information to decide which resources to migrate and to make sure the right users can access the migrated resources.
Review the IAM role updates before you update service accounts and resource authentication.
Review the information about the migration process and the list of resources that cannot be migrated.
Migrate your resources using one of these two methods:
Use the migration tool.
Use the Vertex AI client libraries and methods.
Learn how Vertex AI uses regional endpoints.
Identify which of your applications use AI Platform APIs to decide which method calls you want to migrate.
Update your applications and workflows to use the Vertex AI API and Vertex AI features.
Plan how you will monitor your request quota.
Vertex AI migration pricing
There is no charge for migration itself. Standard fees apply to the new resources created by migration. Migrating datasets from the AI Platform Data Labeling Service, AutoML Vision, AutoML Video Intelligence, and AutoML Natural Language copies them to a Cloud Storage bucket, which incurs storage costs.
Migration is a copy operation: legacy resources remain usable in AutoML and AI Platform after migration, and changes to a legacy resource do not affect the migrated copy. After you confirm that your resources have migrated successfully, shut down or delete the legacy resources to avoid unnecessary charges.
Identify usage of AutoML and AI Platform APIs.
You can determine which of your applications use the AutoML and AI Platform APIs, and which methods they call. Use this information to decide whether you need to migrate those API calls to Vertex AI.
Use the following options to determine which AutoML and AI Platform API calls you might want to migrate.
Visit the APIs & Services Dashboard for each of your projects to view a list of the products whose APIs it utilizes.
If enabled, you can look through the Cloud Audit Logs to see the audit logs generated by AutoML, AI Platform Training, and AI Platform Prediction.
Visit the AI Platform Training and Prediction API Metrics page to view use statistics for specific AI Platform Training and Prediction API methods.
Manage changes to IAM roles and permissions
Vertex AI offers the following Identity and Access Management (IAM) roles:
aiplatform.admin
aiplatform.user
aiplatform.viewer
aiplatform.migrator
Only the aiplatform.admin and aiplatform.migrator roles can migrate resources from AutoML and AI Platform to Vertex AI; resources cannot be migrated with the aiplatform.user or aiplatform.viewer roles.
Resources that cannot be migrated
AutoML Natural Language
AutoML Natural Language models cannot be migrated.
Because Vertex AI does not support PDF text, PDF text from AutoML Natural Language is converted to plain text using optical character recognition.
Empty datasets cannot be migrated.
Batch prediction jobs cannot be migrated.
AutoML Tables
AutoML Tables models created in an alpha version cannot be migrated.
Empty datasets cannot be migrated.
Batch prediction jobs cannot be migrated.
AutoML Video Intelligence
AutoML Video Intelligence models created in an alpha version cannot be migrated.
Empty datasets cannot be migrated.
Batch prediction jobs cannot be migrated.
AutoML Vision
AutoML Vision models created in an alpha version cannot be migrated.
Empty datasets cannot be migrated.
Batch prediction jobs cannot be migrated.
AI Platform
Not every model can be migrated. Migratable models have the following characteristics:
The runtime version must be 1.15 or later.
The framework must be one of the following:
TensorFlow
scikit-learn
XGBoost
Python must be version 3.7 or later.
An AI Platform model whose signature-name flag has been changed from the serving-default value can be migrated to Vertex AI, but it will not work.
Custom prediction routines are not migrated.
Running AI Platform jobs are not migrated, but you can download their metadata for your records.
The Python scripts, packages, or Docker containers that you use with AI Platform Training are not automatically migrated to Vertex AI, but you can update them to run in Vertex AI.
About the migration process
Review the following details first, then migrate your resources.
The migration tool makes a copy of your resources.
The migration tool copies your AutoML and AI Platform datasets and models to Vertex AI. Your legacy resources are preserved, and you can migrate a resource multiple times to create multiple copies.
Migrated models are not deployed.
For data types that support online predictions, you must create an endpoint and deploy the model to it before the model can serve them.
The migration tool automatically creates a training job when it migrates an AutoML model.
For some data types and objectives, a migrated dataset might not contain the same data as the existing dataset.
For some data types, datasets are not copied from the existing dataset; instead, they are re-imported from the original data source, so the migrated dataset reflects any changes made to that source. This caveat applies to the following data types and objectives:
AutoML datasets for natural language entity extraction
AutoML datasets for object tracking and video classification
AutoML Vision datasets for object detection
Exported tabular datasets are created as part of the migration process.
In Vertex AI, a tabular dataset references its data source instead of importing the data. To migrate a tabular dataset, the migration tool exports the data from the AutoML Tables dataset to BigQuery or to a CSV file in Cloud Storage in your project, and the migrated dataset references that exported copy.
Use the migration tool.
You can move your datasets and models from AutoML and AI Platform to Vertex AI using the migration tool that Vertex AI offers.
Steps for using the migration tool
Follow these steps to use the migration tool to migrate your datasets and models to Vertex AI.
If you haven't already done so, click Enable the Vertex AI API on the Vertex AI Dashboard page in the Google Cloud console.
On the Vertex AI Dashboard page in the Google Cloud console, click Set up migration under Migrate to Vertex AI.
Under Select resources to migrate, choose up to 50 resources to migrate. To migrate more resources, repeat these steps.
Review the list of resources that you want to migrate, then click Next.
Click Migrate Assets. Depending on how many resources you migrate, migration can take an hour or longer. The migration tool sends you an email when the migration is complete.
Use the client libraries and methods to migrate resources
To migrate your resources programmatically, use the batchMigrateResources() method and related methods.
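The request messages differ by resource type, so check the current API reference for the exact fields. As a rough sketch (assuming the generated v1 Python client, whose method names follow the REST methods searchMigratableResources and batchMigrateResources), you might first search for migratable resources:

# Rough sketch: listing resources that can be migrated. The project ID
# is a placeholder; verify request fields against the API reference.
from google.cloud import aiplatform_v1

client = aiplatform_v1.MigrationServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

parent = "projects/my-project/locations/us-central1"

# Each result describes an AutoML or AI Platform resource that can be
# migrated; the results are used to build MigrateResourceRequest messages
# for batch_migrate_resources(), which returns a long-running operation.
for resource in client.search_migratable_resources(parent=parent):
    print(resource)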
Regional endpoints
The Vertex AI API uses regional endpoints. For example:
us-central1-aiplatform.googleapis.com
Vertex AI does not support global endpoints.
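When you use the lower-level API clients, you point them at the regional endpoint explicitly; for example (project ID and region are placeholders):

# Minimal sketch of targeting a regional Vertex AI API endpoint.
from google.cloud import aiplatform_v1

client = aiplatform_v1.ModelServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

# The parent location must match the regional endpoint.
parent = "projects/my-project/locations/us-central1"
for model in client.list_models(parent=parent):
    print(model.display_name)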
Update training scripts to run in Vertex AI
The Python scripts, packages, or Docker containers that you use with AI Platform Training need the following adjustments to run on Vertex AI.
For jobs that upload outputs to Cloud Storage, Vertex AI supplies the Cloud Storage URIs for the various output types through environment variables, whereas AI Platform usually passes the Cloud Storage URI through the --job-dir command-line argument (see the sketch after this list).
In the TF_CONFIG variable, Vertex AI refers to the primary machine as the chief; AI Platform sometimes uses the term master.
When you submit a custom training job in Vertex AI, you provide the Artifact Registry URI of a pre-built container that matches your framework and framework version.
In AI Platform, you instead choose a runtime version that includes the framework and framework version you want to use.
Vertex AI does not support every machine type that AI Platform does.
Legacy AI Platform Training machine types and scale tiers are not supported; only the newer Compute Engine machine types are.
The P4, T4, K80, P100, and V100 GPUs are supported.
TPUs are not supported.
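As a sketch of the first two adjustments (output URIs from environment variables and the chief task name), a training script might read its configuration like this. AIP_MODEL_DIR is the documented Vertex AI variable for the model output directory; the fallback values are placeholders.

# Minimal sketch: reading Vertex AI environment variables in a training
# script instead of a --job-dir argument, and handling TF_CONFIG's
# "chief" task name.
import json
import os

# Vertex AI supplies the Cloud Storage output location for the model
# through AIP_MODEL_DIR rather than a --job-dir flag.
model_dir = os.environ.get("AIP_MODEL_DIR", "gs://my-bucket/model")  # placeholder fallback

# In Vertex AI, TF_CONFIG names the primary replica "chief", not "master".
tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
task_type = tf_config.get("task", {}).get("type", "chief")

if task_type == "chief":
    print(f"Chief replica exports the model to {model_dir}")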
Hello image data: Set up your project and environment
Set up Vertex AI in your Google Cloud project. Then copy image files to a Cloud Storage bucket for training an AutoML image classification model. Several steps are included in this section:
Setting up your project and environment.
Creating an image classification dataset and importing images.
Training an AutoML image classification model.
Deploying the model to an endpoint and sending a prediction.
Cleaning up your project.
Each stage of the tutorial assumes that you have completed the stages before it. A rough SDK sketch of these stages follows.
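While this tutorial uses the Google Cloud console, the same stages can be sketched with the Python SDK for orientation. The bucket, import file, and budget below are placeholders, not the tutorial's actual values.

# Rough sketch of the tutorial's stages using the Vertex AI Python SDK.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Create an image classification dataset from an import file that lists
# Cloud Storage image URIs and their labels.
dataset = aiplatform.ImageDataset.create(
    display_name="flowers-dataset",
    gcs_source=["gs://my-bucket/import_file.csv"],
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

# Train an AutoML image classification model.
job = aiplatform.AutoMLImageTrainingJob(
    display_name="flowers-automl-job",
    prediction_type="classification",
)
model = job.run(dataset=dataset, budget_milli_node_hours=8000)

# Deploy the model to an endpoint; image prediction instances must be
# base64-encoded per the model's prediction schema.
endpoint = model.deploy()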
Prerequisite
In this tutorial, you use the Google Cloud console to interact with Google Cloud. Complete the following tasks before you start working with Vertex AI.
If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector.
Make sure that billing is enabled for your Cloud project. Learn how to check whether billing is enabled on a project.
Enable the Vertex AI API. Enable the API
Establish a service account:
Navigate to the Create service account page in the console.
Go to Create service account.
Choose a project.
In the Service account name field, enter a name. The console fills in the Service account ID field based on this name.
In the Service account description field, enter a description. For example: Service account for quickstart.
Click Create and continue.
Grant your service account the following role to provide access to your project: Project > Owner.
In the Select a role list, select a role.
For each additional role you need, click Add another role and add the role.
Then click Continue.
To complete creating the service account, click Done.
Do not close your browser window. You will use it in the next step.
Make a service account key:
Click the email associated with the service account you created in the console.
Click Keys.
Click Add key, and then click Create new key.
Click Create. A JSON key file is downloaded to your computer.
Click Close.
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your service account key. This variable applies only to your current shell session, so if you open a new session, you must set it again.
Hello text data: Set up your project and environment
This section explains how to use Vertex AI to build a text classification model. The tutorial trains an AutoML model on a corpus of user-generated "happy moments" from the open-source Kaggle dataset HappyDB. The resulting model classifies happy moments into categories that reflect their causes.
In this part of the tutorial, you set up your Google Cloud project to use Vertex AI and create a Cloud Storage bucket to hold the documents for training your AutoML model.
Set up your project
In this tutorial, you use the Google Cloud console and Cloud Shell to interact with Google Cloud. Complete the following tasks before you start working with Vertex AI.
If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector.
Make sure that billing is enabled for your Cloud project. Learn how to check whether billing is enabled on a project.
Enable the Vertex AI API. Enable the API
Activate Cloud Shell in the console. Activate Cloud Shell
A Cloud Shell session begins at the bottom of the console, and a command-line prompt appears. Cloud Shell is a shell environment that comes pre-configured for your current project with the Google Cloud CLI installed. The initialization of the session may take a few seconds.
Create a Cloud Storage bucket and copy the sample dataset
Create a Cloud Storage bucket to hold the documents that you'll use to train your AutoML model.
Open Cloud Shell.
Set the PROJECT_ID variable to your project ID:
export PROJECT_ID=PROJECT_ID
Set the BUCKET variable, which you'll use to create the Cloud Storage bucket:
export BUCKET=${PROJECT_ID}-lcm
Using the BUCKET variable, create a Cloud Storage bucket in the us-central1 region (one way to do this is shown below).
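The command for this step isn't shown above; from Cloud Shell, one typical way to create the bucket with the variables you just set is:
gsutil mb -p "${PROJECT_ID}" -l us-central1 "gs://${BUCKET}"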
Frequently Asked Questions
What is Vertex AI Workbench?
Vertex AI Workbench lets data scientists complete their entire ML workflow, from experimentation to deployment to managing and monitoring models, in one place. It is a fully managed, scalable, enterprise-ready compute infrastructure built on Jupyter, with user management and security controls.
How efficient is Vertex AI?
Vertex AI is well suited to building custom machine learning models. It is scalable and can handle more than 1000 GB of data, and models take relatively little time to build, deploy, and compare.
When was Vertex AI introduced?
Vertex AI was introduced on May 18, 2021, replacing AI Platform (Unified). Vertex AI has since expanded support for custom model training, custom model batch prediction, custom model online prediction, and a few other services to additional regions such as us-west1.
Conclusion
In this article, we have extensively discussed Vertex AI. We have also explained the components of Vertex AI, the tools for interacting with it, the steps for migrating to Vertex AI, and how AI Platform and AutoML users can use Vertex AI, among other details.