Table of contents

1. Introduction
2. TensorFlow Enterprise
   2.1. TensorFlow Enterprise distribution
   2.2. Long Term Version Support
   2.3. Packages Included
   2.4. Performance
3. Overview of Vertex AI Workbench user-managed notebooks instances
   3.1. Prerequisite
   3.2. Create a user-managed notebooks instance
   3.3. Open the notebook
   3.4. Run a classification tutorial in your notebook instance
4. Use TensorFlow Enterprise with Deep Learning VM
   4.1. Prerequisite
   4.2. Create a Deep Learning VM instance
   4.3. Connect with SSH, open a notebook, and run a classification tutorial
5. Use TensorFlow Enterprise with a local Deep Learning Containers instance
   5.1. Prerequisite
      5.1.1. Install gcloud CLI and Docker
      5.1.2. Set up your local machine
   5.2. Create a Deep Learning Containers instance
   5.3. Open a JupyterLab notebook and run a classification tutorial
6. Frequently Asked Questions
   6.1. What can you build with TensorFlow?
   6.2. Does DeepMind use TensorFlow?
   6.3. Can TensorFlow be used commercially?
7. Conclusion
Last Updated: Mar 27, 2024

TensorFlow Enterprise

Author: Nagendra

Introduction

Vertex AI Workbench user-managed notebooks instances let you create and manage virtual machine (VM) instances that come pre-configured with JupyterLab. User-managed notebooks instances have a suite of deep learning packages preinstalled, including support for the TensorFlow and PyTorch frameworks, and can be configured as either CPU-only or GPU-enabled.
This blog describes TensorFlow Enterprise in detail, introduces Vertex AI Workbench user-managed notebooks instances, and shows how to use TensorFlow Enterprise with Deep Learning VM and with a local Deep Learning Containers instance.

Without further ado, let's get started.

TensorFlow Enterprise

For a range of Google Cloud AI products, TensorFlow Enterprise offers a seamless, scalable, and supported TensorFlow development experience. A Google Cloud optimised distribution of TensorFlow is available to customers in all versions of TensorFlow Enterprise, and some versions of TensorFlow Enterprise additionally come with Long Term Version Support.

Let's look into the details of the TensorFlow Enterprise distribution.

TensorFlow Enterprise distribution

Custom TensorFlow binaries and related packages can be found in the TensorFlow Enterprise distribution. All the packages included in the TensorFlow Enterprise distribution are open source, and each version is based on a specific version of TensorFlow.

The following products can make use of TensorFlow Enterprise:

  • Vertex AI
     
  • Deep Learning VM Images
     
  • Deep Learning Containers
     
  • AI Platform Training
     

Let's look into the details of Long Term Version Support.

Long Term Version Support

Specific versions of the TensorFlow Enterprise distribution are supported on Google Cloud for three years with security updates and selected bug fixes. For each major TensorFlow release, all users receive one year of support for the most recent minor version; for instance, TensorFlow 1.15 received security updates for a year after its initial release. Versions of the TensorFlow Enterprise distribution that come with Long Term Version Support extend this period to three years.
The TensorFlow Enterprise distribution is not a fork of TensorFlow. Patches and bug fixes made in the TensorFlow Enterprise distribution are contributed back to the mainline TensorFlow source repository while the distribution remains available in Google Cloud products.

Let's look into the details of packages included with TensorFlow Enterprise.

Packages Included

TensorFlow Enterprise comes with the following packages. These packages receive non-API-breaking updates as new releases become available and pass rigorous testing.

  • tensorflow-io

  • tensorflow-estimator

  • tensorflow-probability

  • tensorflow-datasets

  • tensorflow-hub

  • fairness-indicators (included starting with TensorFlow 2.3)

  • and more
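As a quick sanity check in a TensorFlow Enterprise environment, you can verify which of these companion packages are importable. The sketch below is a minimal, standard-library-only example; the module names are the import names corresponding to the packages listed above.

```python
import importlib.util

# Import names of companion packages shipped with TensorFlow Enterprise.
COMPANION_MODULES = [
    "tensorflow_io",
    "tensorflow_estimator",
    "tensorflow_probability",
    "tensorflow_datasets",
    "tensorflow_hub",
]

def check_modules(names):
    """Map each module name to True if it can be imported in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, available in check_modules(COMPANION_MODULES).items():
        print(f"{name}: {'installed' if available else 'missing'}")
```

On a TensorFlow Enterprise image these should report installed; on a bare Python environment most will report missing.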

Performance

Every AI initiative revolves around data. Simply put, a lot of data must be gathered and stored in order to train a deep learning model, and with the development and expansion of accelerators like GPUs and Cloud TPUs, the speed at which the data can be transferred from its storage location to the training process is becoming more and more crucial.
In 2015, Google open-sourced TensorFlow, one of the most well-known machine learning frameworks. Although TensorFlow is intended for all users, its developers offer enterprise-grade support and performance for those on Google Cloud: for organisations running AI workloads on Google Cloud, Google announced TensorFlow Enterprise.

Let's look into the details of the overview of Vertex AI Workbench.

Overview of Vertex AI Workbench user-managed notebooks instances

Vertex AI Workbench user-managed notebooks instances let you create and manage virtual machine (VM) instances that come pre-configured with JupyterLab. They have a suite of deep learning packages preinstalled, including support for the TensorFlow and PyTorch frameworks, and can be configured as either CPU-only or GPU-enabled.
You access a user-managed notebooks instance through its instance URL, which is protected by Google Cloud authentication and authorisation. User-managed notebooks instances are integrated with GitHub and can sync with a GitHub repository.

Prerequisite

You need a Google Cloud project with the Notebooks API enabled before you can create a user-managed notebooks instance.

  • If you're new to Google Cloud, create an account to evaluate how its products perform in real-world scenarios. New customers also receive $300 in free credits to run, test, and deploy workloads.
     
  • Choose or create a Google Cloud project from the project selector page in the Google Cloud dashboard.
     
  • Make sure your Cloud project's billing is enabled. Find out how to determine whether billing is enabled for a project.
     
  • Enable the Notebooks API.
     

Let's look into the details of creating a user-managed notebooks instance.

Create a user-managed notebooks instance

Follow these instructions to create a default TensorFlow Enterprise 2.6 user-managed notebooks instance. To go straight to the Advanced options instance creation dialogue box, go to notebook.new; alternatively, see Create a user-managed notebooks instance with specific properties.

  • Go to the User-managed notebooks page in the Google Cloud console.
     
  • Click New Notebook.
     
  • Select TensorFlow Enterprise 2.6, and then select Without GPUs.

  • Click Create.

  • Vertex AI Workbench launches the instance immediately and turns on an Open JupyterLab link once the instance is ready for use.
     

Open the notebook

Follow these steps to open a user-managed notebooks instance:

  • Click Open JupyterLab next to the name of your user-managed notebooks instance in the Google Cloud dashboard.
     

JupyterLab launches in your user-managed notebooks instance.

Run a classification tutorial in your notebook instance

To try out your new notebook, run a classification tutorial by completing the following steps:

  • In the JupyterLab File Browser, double-click the tutorials folder to open it, then find and open tutorials/keras/basic_classification.ipynb.
     
  • Click the play button to run the tutorial's cells.
     

Let's look into the details of using TensorFlow Enterprise with Deep Learning VM.

Use TensorFlow Enterprise with Deep Learning VM

Deep Learning VM Images is a collection of virtual machine images designed for data science and machine learning applications. Each image comes with key ML frameworks and tools pre-installed, and on instances with GPUs you can use them out of the box to accelerate data processing tasks. Deep Learning VM images support a wide range of processor and framework combinations. Images are currently available for TensorFlow Enterprise, PyTorch, and general high-performance computing, with variants for CPU-only and GPU-enabled workflows.

Prerequisite

  • If you're new to Google Cloud, create an account to evaluate how its products perform in real-world scenarios. New customers also receive $300 in free credits to run, test, and deploy workloads.
     
  • Choose or create a Google Cloud project from the project selector page in the Google Cloud dashboard.
     
  • Make sure your Cloud project's billing is enabled. Find out how to determine whether billing is enabled for a project.
     

Let's look into the details of creating a Deep Learning VM instance.

Create a Deep Learning VM instance

Follow these steps to create a TensorFlow Enterprise Deep Learning VM instance:

  • Navigate to the Google Cloud console's Deep Learning VM Cloud Marketplace page.
     
  • Select Compute Engine > Launch. If a project selection window appears, select the project in which to create the instance. If you are launching Compute Engine for the first time, wait for the one-time API setup process to finish.

  • Enter a deployment name on the New Deep Learning VM deployment page. Your virtual machine's name will start with this; Compute Engine appends -vm to it when creating your instance.

  • For Number of GPUs, select None. GPUs are not needed to follow this guide.

  • Under Framework, select TensorFlow Enterprise 2.3 (CUDA 11.0).

  • You can leave the other settings unchanged for this example.

  • Click Deploy.
     

Your first Deep Learning VM instance has just been generated. The Deployment Manager opens after the instance has been created. Here, you can manage various deployments in addition to your Deep Learning VM instance.

Let's look into the details of connecting with SSH, opening a notebook, and running a classification tutorial.

Connect with SSH, open a notebook, and run a classification tutorial 

To establish an SSH connection to your Deep Learning VM instance, launch a JupyterLab notebook, and run a tutorial on using neural networks with Keras, follow these steps:

  • You can carry out these steps in either Cloud Shell or any environment that has the Google Cloud CLI installed. The gcloud CLI can be used to communicate with your instance.
    • In Google Cloud, click the Activate Cloud Shell button in the upper-right corner if you want to use Cloud Shell.
       
    • Download and install Google Cloud CLI on your local computer if you wish to use it.
       
  • To establish an SSH connection to your instance, enter the following command into Cloud Shell or a local terminal window. Replace my-project-id, my-zone, and my-instance-name with the appropriate values.
     

Code:

gcloud compute ssh --project my-project-id --zone my-zone \
  my-instance-name -- -L 8080:localhost:8080
  • Visit http://localhost:8080 in your local browser to access a JupyterLab notebook that comes with your instance by default.
     
  • Double-click the tutorials folder in the notebook's left-hand column to open it. Then, find and open tutorials/tf2_course/01_neural_nets_with_keras.ipynb.
     
  • To run the tutorial's cells, click the play button.
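Once the SSH tunnel from the gcloud command above is up, you can confirm that JupyterLab is answering on the forwarded port before opening the browser. The following is a hypothetical, standard-library-only check, assuming the default 8080:localhost:8080 mapping:

```python
import urllib.request

def jupyterlab_reachable(url="http://localhost:8080", timeout=5):
    """Return True if an HTTP server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Any successful or redirect response means something is listening.
            return 200 <= resp.status < 400
    except OSError:
        # Connection refused, DNS failure, timeout, or an HTTP error status.
        return False

if __name__ == "__main__":
    print("JupyterLab reachable:", jupyterlab_reachable())
```

If this prints False, the tunnel is not established or JupyterLab has not finished starting on the instance.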
     

Let's look into the details of using TensorFlow Enterprise with a local Deep Learning Containers instance.

Use TensorFlow Enterprise with a local Deep Learning Containers instance 

Deep Learning Containers are a set of Docker containers that come pre-installed with key data science frameworks, libraries, and tools. These containers provide consistent, performance-optimised environments that let you prototype and deploy workflows quickly.

Prerequisite

Install Docker and the Google Cloud CLI by following these instructions, then configure your local machine.

Install gcloud CLI and Docker

Install Docker and the gcloud CLI on your local computer by following these instructions.

  • On your system, download and install gcloud CLI. To communicate with your instance, use the gcloud CLI.
     
  • Download and install Docker
     

Set up your local machine

Follow these instructions to configure your local machine.

  • If you're running a Linux-based operating system such as Ubuntu or Debian, add your username to the docker group with the following command so that you can run Docker without sudo. Replace USERNAME with your username.

After adding yourself to the docker group, you might need to restart your system (or log out and back in) for the change to take effect.

Command:

sudo usermod -a -G docker USERNAME
  • Start Docker. To confirm that Docker is up and running, run the following command, which prints the current date and time:

Command:

docker run busybox date
  • Use gcloud as Docker's credential helper:

Command:

gcloud auth configure-docker
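The configure-docker command registers gcloud as a credential helper in Docker's configuration file (~/.docker/config.json). If you want to verify the registration programmatically, a small standard-library-only sketch like the following can read that file; the gcr.io entry and file path reflect the default setup and may differ on your machine:

```python
import json
from pathlib import Path

def gcr_credential_helper(config_path=None):
    """Return the credential helper registered for gcr.io, or None if absent."""
    path = Path(config_path) if config_path else Path.home() / ".docker" / "config.json"
    try:
        config = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        # Missing or unreadable config file means no helper is registered.
        return None
    return config.get("credHelpers", {}).get("gcr.io")

if __name__ == "__main__":
    helper = gcr_credential_helper()
    print("gcr.io credential helper:", helper or "not configured")
```

After running gcloud auth configure-docker, this should report gcloud as the helper for gcr.io.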

Create a Deep Learning Containers instance

To create a TensorFlow Enterprise Deep Learning Containers instance, complete the steps below for the type of local container you want.

  • Use the following command if you don't need a GPU-enabled container. Replace /path/to/local/dir with the path to the local directory you want to use.

Command:

docker run -d -p 8080:8080 -v /path/to/local/dir:/home \
  gcr.io/deeplearning-platform-release/tf2-cpu.2-3
  • Use the following command for a GPU-enabled container. Replace /path/to/local/dir with the path to the local directory you want to use.

Command:

docker run --runtime=nvidia -d -p 8080:8080 -v /path/to/local/dir:/home \
  gcr.io/deeplearning-platform-release/tf2-gpu.2-3

This command launches the container in detached mode, maps port 8080 on the container to port 8080 on your local machine, and mounts the local directory /path/to/local/dir to /home in the container.

Open a JupyterLab notebook and run a classification tutorial

The container starts a JupyterLab server automatically. Follow these steps to open a JupyterLab notebook and run the classification tutorial.

  • Visit http://localhost:8080 in your local browser to open a JupyterLab notebook.
     
  • Double-click the tutorials folder on the left to open it. Then, find and open tutorials/tf2_course/01_neural_nets_with_keras.ipynb.
     
  • To run the tutorial's cells, click the play button.
     

Frequently Asked Questions

What can you build with TensorFlow?

You can use it for sentiment analysis, voice recognition, language detection, text summarization, image recognition, video detection, time series, and more.

Does DeepMind use TensorFlow?

Yes, DeepMind uses TensorFlow. According to DeepMind, it allows them to pursue their ambitious research goals at much larger scale and an even faster pace.

Can TensorFlow be used commercially?

Yes, TensorFlow is a free and open-source framework that can be used for business purposes.

Conclusion

In this article, we have extensively discussed the details of TensorFlow Enterprise along with the details of Vertex AI Workbench user-managed notebooks instances, using TensorFlow Enterprise with Deep Learning VM and with a local Deep Learning Containers instance.

We hope that this blog has helped you enhance your knowledge of TensorFlow Enterprise. If you would like to learn more, check out our articles on Google Cloud Certification. You can refer to our guided paths on the Coding Ninjas Studio platform to learn more about DSA, DBMS, Competitive Programming, Python, Java, JavaScript, and more. To practise and prepare for interviews, you can also check out Top 100 SQL problems, Interview experience, Coding interview questions, and the Ultimate guide path for interviews. Do upvote our blog to help other ninjas grow. Happy Coding!!
