Table of contents
1. Introduction
2. Transitioning from Container Registry
   2.1. Standard Repositories
   2.2. Repositories with gcr.io Domain Support
3. Setting up repositories
   3.1. Container Registry
   3.2. Artifact Registry
4. Transitioning to Standard Repositories
5. Changes for Building and Deploying in Google Cloud
6. Changes for Docker
7. Migrating Containers from a Third-party Registry
   7.1. Set up Pre-requisites
   7.2. Identify Images to Migrate
   7.3. Copy Identified Images to Artifact Registry
   7.4. Verify that Permissions to the Registry are Correctly Configured
   7.5. Update Manifests for your Deployments
   7.6. Re-deploy your Workloads
8. Frequently Asked Questions
   8.1. What do you understand by Docker?
   8.2. What is Cloud Logging?
   8.3. What is GCP?
   8.4. What services does GCP provide?
   8.5. What is a container?
9. Conclusion
Last Updated: Mar 27, 2024

Overview of Artifact Registry


Introduction

Artifact Registry is a service for storing and managing container images on Google Cloud. As a fully managed service that supports both container images and non-container artifacts, it extends the capabilities of Container Registry and adds new features for its clients.

Transitioning from Container Registry

You can transition from Container Registry to Artifact Registry using one of these options:

Standard Repositories 

These are regular Artifact Registry repositories that support all features and are entirely separate from any existing Container Registry hosts.

Repositories with gcr.io Domain Support

These are special repositories mapped to Container Registry gcr.io hostnames. These repositories support:

  • Redirecting traffic from gcr.io hostnames to the corresponding Artifact Registry repositories in your project.
  • gcloud container images commands

Setting up repositories

In Artifact Registry, you must create repositories before you can push images to them. A crucial part of moving to Artifact Registry is therefore setting up Artifact Registry repositories and integrating them into your CI/CD automation.

To provide greater flexibility, there are changes in how Artifact Registry represents repositories.

Container Registry

Each multi-regional location is backed by a single storage bucket. Organizing your images into repositories under a hostname is optional. Consider the following example, which shows the image webapp stored under three different paths:

us.gcr.io/my-project/webapp
us.gcr.io/my-project/team1/webapp
us.gcr.io/my-project/team2/webapp


Repositories are only an organizing tool and do not restrict access. Any user with access to the storage bucket for us.gcr.io in this project can view all versions of the webapp container image.

Artifact Registry

Each repository is an independent resource in your project. Since each repository is a unique resource, you can do the following:

  • Give each repository a description, name, and labels
  • Configure repository-specific permissions
  • Create multiple repositories in the same location


Also, the location of a repository can be a multi-region or a region.

These changes give you more control over your repositories. For example, suppose you have teams in São Paulo and Sydney. In that case, you can create a repository for each team in a region close to that team:

southamerica-east1-docker.pkg.dev/my-project/team1/webapp
australia-southeast1-docker.pkg.dev/my-project/team2/webapp


You can then give each team permissions to their team repository only.
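As a sketch of this setup, the following script creates one repository per team and restricts access to it. The project, repository names, regions, and group addresses are all hypothetical, and DRY_RUN=1 (the default here) prints each gcloud command instead of executing it:

```shell
#!/bin/bash
# Sketch: one Artifact Registry repository per team, each in its own region,
# with push access limited to that team's group. All names are hypothetical.
# DRY_RUN=1 (the default) prints each gcloud command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ -n "${DRY_RUN}" ]; then echo "gcloud $*"; else gcloud "$@"; fi
}

setup_team_repo() {
  local repo="$1" region="$2" group="$3"
  # Create the team's Docker repository in its region.
  run artifacts repositories create "$repo" \
      --project=my-project --repository-format=docker --location="$region"
  # Only this team's group may push to its repository.
  run artifacts repositories add-iam-policy-binding "$repo" \
      --project=my-project --location="$region" \
      --member="group:${group}" --role="roles/artifactregistry.writer"
}

setup_team_repo team1 southamerica-east1 team1@example.com
setup_team_repo team2 australia-southeast1 team2@example.com
```

Unset DRY_RUN to execute the commands for real.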

Transitioning to Standard Repositories

The steps for transitioning to standard repositories are as follows:

  • Learn about pricing for Artifact Registry before you begin the transition.
  • Enable the Artifact Registry API.
     
gcloud services enable artifactregistry.googleapis.com

 

  • Update the Google Cloud CLI and learn about the new gcloud commands. Optionally, set up defaults for gcloud commands.
     
gcloud components update

 

  • Create a Docker repository for your containers. You must create a repository before you can push images to it.
  • Grant permissions to the repository.
  • Configure authentication so that you can connect to your new repository.
  • If needed, copy images from Container Registry that you want to use in your new repository.
  • Try pushing and pulling your containers.
     
# Container Registry
docker push eu.gcr.io/my-project/my-image
# Artifact Registry
docker push europe-north1-docker.pkg.dev/my-project/my-repo/my-image

 

  • Try deploying your images to a runtime environment.
  • Configure additional features.
  • Clean up images in the container registry when the transition is complete.
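Put together, the first few steps above can be sketched as a single script. The repository name, location, and project are hypothetical placeholders, and DRY_RUN=1 (the default here) prints each gcloud command instead of executing it:

```shell
#!/bin/bash
# Sketch of the standard-repository transition steps. Repository name,
# location, and project are hypothetical placeholders.
# DRY_RUN=1 (the default) prints each gcloud command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ -n "${DRY_RUN}" ]; then echo "gcloud $*"; else gcloud "$@"; fi
}

PROJECT=my-project
REPO=my-repo
LOCATION=europe-north1

# Enable the API and create the Docker repository.
run services enable artifactregistry.googleapis.com
run artifacts repositories create "$REPO" \
    --project="$PROJECT" --repository-format=docker --location="$LOCATION"

# Configure Docker authentication for the regional host.
run auth configure-docker "${LOCATION}-docker.pkg.dev"
```

Unset DRY_RUN to execute the commands for real.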

Changes for Building and Deploying in Google Cloud 

  • Enable the Artifact Registry API.
    You must enable the Artifact Registry API. Cloud Build and runtime environments such as GKE and Cloud Run do not automatically enable the API for you.
     
gcloud services enable \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com

 

  • Create the target Docker repository if it doesn't already exist. You must create a repository before you can push any images to it. Pushing an image can't trigger the creation of a repository and the Cloud Build service account does not have permissions to create repositories.
     
  • Build, tag, and push an image to the repository using Cloud Build, with a Dockerfile or a build config file. The following example command is the same as the Container Registry example but uses an Artifact Registry repository path for the image.
     
gcloud builds submit --tag us-central1-docker.pkg.dev/my-project/my-repo/my-image:tag1

 

  • Deploy the image to a Google Cloud runtime environment like GKE or Cloud Run. The following example command is the same as the Container Registry example but uses the Artifact Registry repository path for the image.
     
gcloud run deploy my-service --image us-central1-docker.pkg.dev/my-project/my
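The build-and-push step can also be driven by a build config file instead of the --tag flag. A minimal sketch, assuming the same hypothetical repository and image names:

```yaml
# cloudbuild.yaml -- builds the image and pushes it to Artifact Registry.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image:tag1', '.']
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image:tag1'
```

Submit it with gcloud builds submit --config cloudbuild.yaml .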

Changes for Docker 

At a high level, the workflow for using Docker with Artifact Registry or Container Registry is the same.

However, Container Registry allows a shortcut that combines the user and administrator roles into a single workflow. This shortcut is common in:

  • Tutorials and Quickstarts where you are testing in an environment with broad permissions.
  • Workflows that use Cloud Build, since the Cloud Build service account has permissions to add a registry host in the same Google Cloud project.

The shortcut workflow is like this:

  • Enable the Container Registry API.
  • Grant permissions to the account that will access the registry.
  • Authenticate to the registry. The simplest authentication option is using the Docker credential helper in Google Cloud CLI. This is a one-time configuration step.
     
gcloud auth configure-docker

 

  • Build and tag the image. For example, this command builds and tags the image gcr.io/my-project/my-image:tag1:
     
docker build -t gcr.io/my-project/my-image:tag1 .

 

  • Push the image to the registry. For example:
     
docker push gcr.io/my-project/my-image:tag1

 

  • If the gcr.io registry host does not exist in the project, Container Registry adds the host before uploading the image.
  • Pull the image from the registry or deploy it to a Google Cloud runtime. For example:
     
docker pull gcr.io/my-project/my-image:tag1

Migrating Containers from a Third-party Registry

Migration of your container images includes the following steps:

Set up Pre-requisites

  • Verify your permissions. You must have the Owner or Editor IAM role in the projects where you are migrating images to Artifact Registry.
     
  • Go to the project selector page.
    • Select the Google Cloud project where you want to use Artifact Registry.
    • In the console, go to Cloud Shell.
    • Find your project ID and set it in Cloud Shell, replacing YOUR_PROJECT_ID with your project ID:

gcloud config set project YOUR_PROJECT_ID
       
  • Export the following environment variable:
     
  export PROJECT=$(gcloud config get-value project)

 

  • Enable the Artifact Registry, Cloud Logging, and Cloud Monitoring APIs with the following command:
     
gcloud services enable \
artifactregistry.googleapis.com \
stackdriver.googleapis.com \
logging.googleapis.com \
monitoring.googleapis.com

 

  • If you are not already using Artifact Registry, configure a repository for the images:
    • Create a repository.
    • Configure authentication for third-party clients that need access to the repository.
  • Verify that the latest Go version is installed. To check the version of an existing Go installation, use the command:
     
go version

Identify Images to Migrate

  • Go through your Dockerfiles and deployment manifests for references to third-party registries.
  • Calculate the pull frequency of images from third-party registries using BigQuery and Cloud Logging.
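A quick way to start the inventory is to scan your source tree for image references that are not already hosted on Google registries. This is a rough sketch; the scanned directory and the matching rules are assumptions that will need tightening for your repositories:

```shell
#!/bin/bash
# Sketch: list image references in Dockerfiles and Kubernetes manifests
# that do not already point at gcr.io or pkg.dev. The matching rules are
# deliberately loose; adjust them for your repositories.
scan_third_party_images() {
  local dir="$1"
  # FROM lines in Dockerfiles and image: lines in manifests,
  # stripped down to the bare image reference.
  grep -rhE '^FROM |image:' "$dir" 2>/dev/null \
    | sed -E 's/^FROM +//; s/.*image:[[:space:]]*//' \
    | grep -vE 'pkg\.dev|gcr\.io' \
    | sort -u
}

scan_third_party_images .
```

The resulting list is a starting point for the images.txt file used in the copy step below.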

Copy Identified Images to Artifact Registry

After identifying images from third-party registries, you are ready to copy them to Artifact Registry. The gcrane tool handles the copying process.

  • Make a text file images.txt in Cloud Shell with the names of the images you identified. For example:
     
ubuntu:18.04
debian:buster
hello-world:latest
redis:buster
jupyter/tensorflow-notebook

 

  • Download gcrane:

go install github.com/google/go-containerregistry/cmd/gcrane@latest

 

  • Create a script named copy_images.sh to copy your list of images.
     
#!/bin/bash
images=$(cat images.txt)

if [ -z "${AR_PROJECT}" ]
then
    echo "ERROR: AR_PROJECT must be set before running this"
    exit 1
fi

for img in ${images}
do
    gcrane cp "${img}" "LOCATION-docker.pkg.dev/${AR_PROJECT}/${img}"
done

 

  • Replace LOCATION with the region or multi-region of your repository.
  • Make the script executable:
     
chmod +x copy_images.sh

 

  • Run the script to copy the images:
     
AR_PROJECT=${PROJECT}
./copy_images.sh

Verify that Permissions to the Registry are Correctly Configured

(Particularly if Artifact Registry and your Google Cloud deployment environment are in different projects)

By default, Google Cloud CI/CD services have access to Artifact Registry in the same Google Cloud project.

  • Cloud Build can pull and push images
  • Runtime environments such as Cloud Run, GKE, the App Engine flexible environment, and Compute Engine can pull images.

If you need to push and pull images across projects or use third-party tools in your pipeline that need to access Artifact Registry, make sure that permissions are configured correctly before you update and re-deploy your workloads.
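For the cross-project case, a sketch of the grant looks like the following. The project IDs, repository, region, and service account are hypothetical, and DRY_RUN=1 (the default here) prints the gcloud command instead of executing it:

```shell
#!/bin/bash
# Sketch: let a deployment project's service account pull from a repository
# in another project. All identifiers below are hypothetical placeholders.
# DRY_RUN=1 (the default) prints the gcloud command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ -n "${DRY_RUN}" ]; then echo "gcloud $*"; else gcloud "$@"; fi
}

# Grant read-only access on the repository to the deployment project's
# default compute service account (used, for example, by GKE node pulls).
run artifacts repositories add-iam-policy-binding my-repo \
    --project=registry-project --location=us-central1 \
    --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
    --role="roles/artifactregistry.reader"
```

Unset DRY_RUN to execute the command for real.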

Update Manifests for your Deployments

Update your Dockerfiles and your manifests to refer to Artifact Registry instead of the third-party registry. The following example shows a Deployment manifest that references a Docker Hub image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80


This updated version of the manifest points to an image on us-docker.pkg.dev.
 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: us-docker.pkg.dev/<AR_PROJECT>/nginx:1.14.2
        ports:
        - containerPort: 80
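If you have many manifests, a small script can apply the rewrite mechanically. This sed sketch prefixes bare image references with your Artifact Registry path and leaves lines that already point at gcr.io or pkg.dev untouched; the registry prefix is an assumption from the copy step above:

```shell
#!/bin/bash
# Sketch: rewrite bare image references in a manifest to their Artifact
# Registry copies. Lines already on gcr.io or pkg.dev are left alone.
# The registry prefix is an assumption; match it to where you copied images.
rewrite_images() {
  local file="$1" prefix="$2"
  sed -E "/pkg\.dev|gcr\.io/!s|image:[[:space:]]*|image: ${prefix}/|" "$file"
}

# Example: rewrites nginx:1.14.2 but leaves the pkg.dev line unchanged.
printf 'image: nginx:1.14.2\nimage: us-docker.pkg.dev/p/repo/redis:buster\n' \
  | rewrite_images /dev/stdin "us-docker.pkg.dev/my-project"
```

The function writes the rewritten manifest to stdout; redirect the output or switch to sed -i to modify files in place.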

Re-deploy your Workloads

After re-deploying workloads with your updated manifests, track new image pulls by running the following query in the BigQuery console:

SELECT
  FORMAT_TIMESTAMP("%D %R", timestamp) AS timeOfImagePull,
  REGEXP_EXTRACT(jsonPayload.message, r'"(.*?)"') AS imageName,
  COUNT(*) AS numberOfPulls
FROM
  `image_pull_logs.events_*`
GROUP BY
  timeOfImagePull,
  imageName
ORDER BY
  timeOfImagePull DESC,
  numberOfPulls DESC

 

All new image pulls should come from Artifact Registry and should contain the string docker.pkg.dev.

Frequently Asked Questions

What do you understand by Docker?

Docker is an open-source containerization platform. It packages your application and all its dependencies together in containers, ensuring that your application works seamlessly in development, test, and production.

What is Cloud Logging?

Cloud Logging is a service that enables you to store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon Web Services.

What is GCP?

GCP (Google Cloud Platform) is a public cloud provider. Customers can use GCP, like other cloud providers, to access computing resources located in Google's data centers worldwide, for free or on a pay-per-use basis.

What services does GCP provide?

GCP provides a full range of computing services, including tools for managing GCP costs, managing data, delivering web content and online video, and using AI and machine learning.

What is a container?

A container is an abstraction at the application layer that packages code and dependencies together.

Conclusion

I hope this article provided you with insights into Artifact Registry in GCP, Container Registry, and the changes needed for building and deploying on Google Cloud.

Refer to our guided paths on Coding Ninjas Studio to learn more about DSA, Competitive Programming, System Design, JavaScript, etc. Enroll in our courses, refer to the mock test and problems available, interview puzzles, and look at the interview bundle and interview experiences for placement preparations.

