Code360 powered by Coding Ninjas X Naukri.com
Last Updated: Mar 27, 2024

Vertex Explainable AI


Introduction

Vertex AI provides feature attributions through Vertex Explainable AI. This page gives a basic conceptual overview of the feature attribution techniques that Vertex AI offers. Data scientists can complete all of their ML work in Vertex AI Workbench, from experimentation to deployment to managing and monitoring models. It is a fully managed, scalable, enterprise-ready compute infrastructure built on the Jupyter platform, with user administration and security controls.

This blog explains the details of Vertex Explainable AI: using TensorFlow with Vertex Explainable AI, configuring explanations, configuring visualisation settings, and improving explanations.

Without further ado, let's get started.

Introduction to Vertex Explainable AI

Vertex Explainable AI tells you how much each feature in the data contributed to the predicted outcome. You can use this information to verify that the model is behaving as expected, identify bias in your models, and get ideas for improving your model and training data.

Vertex AI provides Vertex Explainable AI for the following categories of models:

  • AutoML image models (classification models only)
     
  • AutoML tabular models (classification and regression models only)
     
  • Custom-trained models based on tabular data
     
  • Custom-trained models based on image data
     

Let's get into the details of feature attributions.


Feature attributions

Feature attributions show how much each feature in your model contributed to the prediction for each specific instance. When you request predictions, you get predicted values appropriate for your model. When you request explanations, you get the predictions along with feature attribution information.

Feature attributions work on tabular data and include built-in visualisation capabilities for image data. Consider the following examples:

  • A deep neural network is trained to estimate the duration of a bike ride based on weather data and past ride-sharing data. If you request only predictions from this model, you get the predicted ride durations in minutes. If you request explanations, you get the predicted duration along with an attribution score for each feature. The attribution scores show how much each feature changed the prediction value relative to the baseline value that you specify. Choose a baseline that makes sense for your model; here, the median ride duration is a reasonable choice. You can plot the feature attribution scores to see which features contributed most strongly to the prediction.
     
  • An image classification model is trained to determine whether an image contains a dog or a cat. If you request predictions from this model on a new set of images, you get a prediction for each image ("dog" or "cat"). If you request explanations, you get the predicted class along with an overlay for the image showing which pixels contributed most strongly to the prediction.
     

Let's look at the advantages of feature attributions.

Advantages

You can gain more insight into how your model works by analysing specific instances and by aggregating feature attributions across your training dataset. Consider the following benefits:

  • Model debugging: Feature attributions can be used to find problems in the data that traditional model evaluation methods would typically miss.
    For instance, on a test dataset of chest X-Ray scans, an image pathology model produced results that were surprisingly good. The radiologist's pen marks in the image were responsible for the model's excellent accuracy, according to feature attributions. The AI Explanations Whitepaper has further information about this example.
     
  • Model optimization: By identifying and removing features that are less important, model optimization can lead to more effective models.
     

Let's look at the limitations of feature attributions.

Limitations

Feature attributions have the following limitations:

  • Each prediction has its own feature attributions (including local feature importance for AutoML). Although examining the feature attributions for a particular prediction may provide useful insight, that insight may not generalise to the whole class for that instance or to the whole model.
     
  • Although feature attributions can aid in model debugging, they don't always make it evident whether a problem stems from the model itself or from the training data. Diagnose typical data problems using your best judgment to reduce the number of likely causes.
     
  • In complicated models, feature attributions are vulnerable to the same adversarial attacks as predictions.
     

Let's look into the details of Differentiable and non-differentiable models.

Differentiable and non-differentiable models

You can determine the derivative of any operation in your TensorFlow graph in differentiable models. In such models, backpropagation is facilitated by this characteristic. Neural networks are one type of differentiable system. The integrated gradients approach can be used to obtain feature attributions for differentiable models.

Non-differentiable models incorporate non-differentiable TensorFlow graph operations, such as those that carry out encoding and rounding operations. A non-differentiable example is a model created using an ensemble of trees and neural networks. Use the sampled Shapley approach to obtain feature attributions for non-differentiable models. Although Sampled Shapley also applies to differentiable models, doing so incurs additional processing costs.

Let's dive into the Feature attribution methods.

Feature attribution methods

Each feature attribution technique is based on Shapley values, a cooperative game theory algorithm that assigns credit to each player in a game for a particular outcome. Applied to machine learning models, this means that each model feature is treated as a "player" in the game. Vertex Explainable AI assigns each feature a proportional share of the credit for a given prediction's outcome.
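To make this concrete, here is a minimal, self-contained sketch (a toy illustration, not the Vertex AI implementation) that computes exact Shapley values for a tiny model by averaging each feature's marginal contribution over every possible ordering of the features:

```python
from itertools import permutations

def exact_shapley(f, x, baseline):
    """Exact Shapley values for a toy model: average each feature's
    marginal contribution over every possible ordering of the "players"."""
    n = len(x)
    attrib = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]            # switch feature i from baseline to input
            val = f(current)
            attrib[i] += val - prev      # marginal contribution of feature i
            prev = val
    return [a / len(orders) for a in attrib]

# Toy model with a feature interaction: f(v) = v[0] * v[1]
shap = exact_shapley(lambda v: v[0] * v[1], x=[2.0, 3.0], baseline=[0.0, 0.0])
print(shap)  # [3.0, 3.0]: the interaction's credit is split evenly
```

Note how the features share credit for the interaction term evenly, which is the defining fairness property of Shapley values.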

Sampled Shapley Method

The sampled Shapley method provides a sampling approximation of the exact Shapley values. AutoML tabular models use the sampled Shapley method for feature importance. Sampled Shapley works well for these models, which are meta-ensembles of trees and neural networks.
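The sampling idea can be sketched as follows, assuming a generic model function f (this is an illustrative approximation, not Vertex AI's actual implementation): instead of enumerating every feature ordering, average marginal contributions over a random sample of orderings:

```python
import random

def sampled_shapley(f, x, baseline, path_count=100, seed=0):
    """Approximate Shapley values by averaging marginal contributions
    over a random sample of feature orderings ("paths")."""
    rng = random.Random(seed)
    n = len(x)
    attrib = [0.0] * n
    for _ in range(path_count):
        order = list(range(n))
        rng.shuffle(order)                # a random ordering of the features
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]             # switch feature i from baseline to input
            val = f(current)
            attrib[i] += val - prev       # marginal contribution of feature i
            prev = val
    return [a / path_count for a in attrib]

# For a linear model, the approximation is exact regardless of the sample:
attrib = sampled_shapley(lambda v: 2 * v[0] + 3 * v[1], [1.0, 1.0], [0.0, 0.0])
print(attrib)  # [2.0, 3.0]
```

Increasing path_count reduces the approximation error at the cost of more model evaluations.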

Integrated Gradients Method

The integrated gradients approach involves calculating the gradient of the prediction output in relation to the input features along an integral path.

  • The gradients are calculated at different intervals of a scaling parameter. The size of each interval is determined using the Gaussian quadrature rule. (For image data, imagine this scaling parameter as a "slider" scaling the entire image to black.)
     
  • The gradients are combined as follows:
    • The integral is approximated using a weighted average of the gradients.
       
    • The element-wise product of the averaged gradients and the original input is calculated.
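The steps above can be sketched for a single differentiable feature as follows. This toy version uses a simple midpoint rule rather than the Gaussian quadrature that Vertex AI uses, and the gradient function is supplied by hand:

```python
def integrated_gradients(grad, x, baseline, step_count=50):
    """Approximate the integrated gradients attribution for one feature:
    average the gradient along the straight-line path from baseline to x,
    then multiply by (x - baseline)."""
    total = 0.0
    for k in range(step_count):
        alpha = (k + 0.5) / step_count            # midpoint of each interval
        point = baseline + alpha * (x - baseline)
        total += grad(point)
    avg_grad = total / step_count                 # weighted average of gradients
    return (x - baseline) * avg_grad              # element-wise product

# For f(x) = x**2 (gradient 2x), baseline 0, input 3:
attribution = integrated_gradients(lambda p: 2 * p, x=3.0, baseline=0.0)
print(round(attribution, 6))  # 9.0, which equals f(3) - f(0)
```

The attribution sums to the difference between the prediction at the input and at the baseline, which is the property the approximationError check described later in this article relies on.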

XRAI Method

To discover which areas of the image contribute most to a given class prediction, the XRAI method combines the integrated gradients method with additional steps.

  • Pixel-level attribution: XRAI performs pixel-level attribution for the input image. In this step, XRAI uses the integrated gradients method with both a black baseline and a white baseline.
     
  • Oversegmentation: Independently of pixel-level attribution, XRAI oversegments the image to produce a patchwork of small regions. XRAI uses Felzenszwalb's graph-based method to create the image segments.
     
  • Region selection: XRAI determines the attribution density of each segment by averaging the pixel-level attribution within it. These values are used by XRAI to rank each segment, and the segments are then arranged from most to least positive. This establishes which portions of the image are particularly noticeable or substantially influence a particular class prediction.
     

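The region-selection step can be sketched as follows, assuming you already have a pixel-level attribution map and a segment label for each pixel (both inputs here are hypothetical):

```python
def rank_segments(pixel_attributions, segment_labels):
    """Rank image segments by attribution density: the mean pixel-level
    attribution inside each segment, most positive first."""
    totals, counts = {}, {}
    for attrib_row, label_row in zip(pixel_attributions, segment_labels):
        for attrib, label in zip(attrib_row, label_row):
            totals[label] = totals.get(label, 0.0) + attrib
            counts[label] = counts.get(label, 0) + 1
    density = {label: totals[label] / counts[label] for label in totals}
    return sorted(density.items(), key=lambda item: item[1], reverse=True)

# Two segments: segment 0 covers the high-attribution pixels, segment 1 does not.
ranked = rank_segments(
    pixel_attributions=[[0.9, 0.8, 0.1], [0.0, 0.1, 0.0]],
    segment_labels=[[0, 0, 1], [1, 1, 1]],
)
print(ranked)  # segment 0 ranks first with the highest density
```

Segments at the top of the ranking are the image regions that most strongly influenced the class prediction.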
Let's look into the details of configuring explanations.

Configure explanations

To use Vertex Explainable AI with a custom-trained model, you must configure certain options when you create the Model resource that you plan to request explanations from, when you deploy the model, or when you submit a batch explanation job.

When and where to configure explanations

You can configure explanations when you create or import a model. You can also configure explanations later, on a model that you have already created without them.

Configure explanations when creating or importing models

Using the Model's explanationSpec field, you may specify a default configuration for all of a Model's explanations when you create or import it.

There are two ways to create a custom-trained Model in Vertex AI:

  • Import a model.
     
  • Create a custom TrainingPipeline resource that imports a Model for you.
     

You can configure the Model to support Vertex Explainable AI in either case. The examples in this article assume that you are importing a Model. To configure Vertex Explainable AI when you create a custom-trained Model with a TrainingPipeline, use the configuration options described in this document in the TrainingPipeline's modelToUpload field.

Configure explanations when deploying models or getting batch predictions

When you deploy a Model to an Endpoint resource, a DeployedModel is created. You can define a default explanation configuration by populating the DeployedModel's explanationSpec field. At this stage you can override settings that were configured when the Model was created.
The explanationSpec field in the BatchPredictionJob resource allows you to modify parts or all of the explanation settings when receiving batch predictions from a model and asking for explanations as part of your batch prediction request.
These choices can be helpful if, while creating the model, you forgot to include the explanationSpec field and later decide that you want explanations for the Model.

Override the configuration when getting online explanations

You can alter the Model's initial explanation settings when you receive online explanations, regardless of whether you generated or imported the Model with explanation settings or configured explanation settings during deployment.
You can change some of the explanation configuration that was previously established for the Model or the DeployedModel when you send an explain request to Vertex AI.

The following fields in the explain request can be modified:

  • Input baselines for any custom-trained model
     
  • Visualization configuration for image models
     
  • ExplanationParameters except for the method
     

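As a sketch, an online explain request that overrides these fields might look like the following body. The field names follow the Vertex AI REST API, but treat the exact structure, the feature name feature_a, and its baseline as assumptions to verify against the API reference:

```python
# Hypothetical request body for an endpoints.explain call that overrides
# the explanation configuration. "feature_a" and its baseline are made up.
explain_request = {
    "instances": [{"feature_a": 1.0}],
    "explanationSpecOverride": {
        # Override parameters of the configured method (the method itself
        # cannot be changed in an explain request).
        "parameters": {"sampledShapleyAttribution": {"pathCount": 50}},
        # Override the input baselines for a custom-trained model.
        "metadata": {"inputs": {"feature_a": {"inputBaselines": [0.0]}}},
    },
}
print(explain_request["explanationSpecOverride"]["parameters"])
```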
Let's dive into using TensorFlow with Vertex Explainable AI.

Use TensorFlow with Vertex Explainable AI

When you work with custom-trained TensorFlow models, you need certain information in order to save your model and configure explanations.

Finding input and output tensor names during training

To serve predictions with a pre-built TensorFlow container, you need to know the names of your model's input and output tensors. You include these names in an ExplanationMetadata message when you configure a Model for Vertex Explainable AI.

If your TensorFlow model satisfies the following requirements, you can use the "basic method" described in the next section to determine these tensor names during training:

  • Your inputs aren't serialised.
     
  • Each input to the model's SignatureDef directly contains the value of the feature (either numeric values or strings).
     
  • The outputs are numeric values treated as numerical data. This excludes class IDs, which are categorical data.

The basic method

Print the name attribute of your model's input and output tensors while the model is being trained. In the following example, the Keras layer's name field produces the underlying tensor name you need for your ExplanationMetadata:

Code: 

bow_inputs = tf.keras.layers.Input(shape=(2000,))
merged_layer = tf.keras.layers.Dense(256, activation="relu")(bow_inputs)
predictions = tf.keras.layers.Dense(10, activation="sigmoid")(merged_layer)
model = tf.keras.Model(inputs=bow_inputs, outputs=predictions)
print('input_tensor_name:', bow_inputs.name)
print('output_tensor_name:', predictions.name)

Output: 

input_tensor_name: input_1:0
output_tensor_name: dense_1/Sigmoid:0

When configuring your Model for explanations, use input_1:0 as the input tensor name and dense_1/Sigmoid:0 as the output tensor name.

Adjusting training code and finding tensor names in special cases

In a few typical situations, the input and output tensors in your ExplanationMetadata should not match those in your serving SignatureDef:

  • Your inputs are serialised.
     
  • Preprocessing activities are present in your graph.
     
  • Your serving outputs are not floating point tensors such as logits, probabilities, or other types.
     
In these situations, you should use different techniques to find the right input and output tensors. The overall goal is to find the tensors corresponding to the feature values you want to explain for inputs, and the tensors corresponding to logits (pre-activation), probabilities (post-activation), or any other representation for outputs.

Special cases for input tensors

If you supply your model with a serialised input or if your graph contains preprocessing operations, the inputs in your explanation metadata and your SignatureDef are different.

Serialized inputs

TensorFlow SavedModels may take a wide range of intricate inputs, such as:

  • Serialized tf.Example messages
     
  • JSON strings
     
  • Encoded Base64 strings (to represent image data)
     

If your model accepts serialised inputs like these, using those tensors directly for your explanations won't work, or might yield nonsensical results. Instead, you should identify the appropriate input tensors that feed into feature columns within your model.
You can add a parsing operation to your TensorFlow graph when you export your model, by calling a parsing function in the serving input function. The tf.io module lists parsing functions that you can use. The tensors returned by these parsing functions are usually better choices for your explanation metadata.
For instance, you might use tf.parse_example() when exporting your model. It takes a serialised tf.Example message and outputs a dictionary of tensors that feed into feature columns; you can use its output to fill in your explanation metadata. If any of these outputs are tf.SparseTensor, which is a named tuple consisting of three tensors, you should get the names of the indices, values, and dense_shape tensors and fill in the corresponding fields in the metadata.

The following example shows how to get the name of the input tensor after a decoding operation:

Code:

float_pixels = tf.map_fn(
    lambda img_string: tf.io.decode_image(
        img_string,
        channels=color_depth,
        dtype=tf.float32
    ),
    features,
    dtype=tf.float32,
    name='input_convert'
  )
print(float_pixels.name)

Preprocessing inputs

If your model graph includes preprocessing operations, you might want to get explanations for the tensors after the preprocessing step. In that case, you can get the names of those tensors using the name property of tf.Tensor and add them to the explanation metadata:

Code: 

item_one_hot = tf.one_hot(item_indices, depth,
    on_value=1.0, off_value=0.0,
    axis=-1, name="one_hot_items")
print(item_one_hot.name)

Special cases for output tensors 

The outputs in your serving SignatureDef are often either logits or probabilities.

If your model outputs probabilities but you want to explain the underlying logit values instead, you must identify the output tensor names that correspond to the logits.

Special considerations for integrated gradients

If you want to use the integrated gradients feature attribution method with Vertex Explainable AI, you must ensure that your inputs are differentiable with respect to the output.
The explanation metadata logically separates a model's features from its inputs. When using integrated gradients with an input tensor that is not differentiable with respect to the output tensor, you must also supply the encoded (and differentiable) version of that feature.

If your input tensors are non-differentiable or your graph contains non-differentiable operations, use the following strategy:

  • Encode the non-differentiable inputs to obtain differentiable tensors.
     
  • Set input_tensor_name to the name of the original, non-differentiable input tensor, and set encoded_tensor_name to the name of its encoded, differentiable version.
     

Explanation metadata file with encoding

Consider a model with a categorical feature whose input tensor is zip_codes:0. Because the input data includes zip codes as strings, the input tensor zip_codes:0 is not differentiable. If the model also preprocesses this data to obtain a one-hot encoding of the zip codes, that preprocessed tensor is differentiable. To distinguish it from the original input tensor, you might name it zip_codes_embedding:0.
When configuring your Model for explanations, set the ExplanationMetadata as follows to use the information from both input tensors in your explanations request:

  • Set the input feature key to a meaningful name, such as zip_codes.
     
  • Set input_tensor_name to the name of the original tensor, zip_codes:0.
     
  • Set encoded_tensor_name to the name of the tensor after one-hot encoding, zip_codes_embedding:0.
     
  • Set encoding to COMBINED_EMBEDDING.
     

Code: 

{
    "inputs": {
      "zip_codes": {
        "input_tensor_name": "zip_codes:0",
        "encoded_tensor_name": "zip_codes_embedding:0",
        "encoding": "COMBINED_EMBEDDING"
      }
    },
    "outputs": {
      "probabilities": {
        "output_tensor_name": "dense/Softmax:0"
      }
    }
}

Export TensorFlow SavedModels for Vertex Explainable AI

Export a TensorFlow model as a SavedModel after training it. The TensorFlow SavedModel includes the serialised signatures, variables, and other resources required to operate the graph in addition to your learned TensorFlow model. A function in your graph that accepts tensor inputs and produces tensor outputs is designated by a SignatureDef in the SavedModel.

Depending on whether you're using TensorFlow 2 or TensorFlow 1, follow the procedures in one of the following sections to make sure your SavedModel is compatible with Vertex Explainable AI.

TensorFlow 2

When using TensorFlow 2.x, save your model with tf.saved_model.save. You can specify input signatures when you save it. If you have only one input signature, Vertex Explainable AI uses your default serving function for explanation requests. If you have more than one input signature, specify the serving default function's signature when you save your model:

Code: 

tf.saved_model.save(m, model_dir, signatures={
    'serving_default': serving_fn,
    'xai_model': model_fn # Required for XAI
    })

In this case, Vertex Explainable AI fulfils your explanation request using the model function signature saved with the xai_model key. Use the exact string xai_model for the key.
If you use a preprocessing function, you must also provide the signatures for your preprocessing function and your model function, using exactly the strings "xai_preprocess" and "xai_model" as the keys:

Code: 

tf.saved_model.save(m, model_dir, signatures={
    'serving_default': serving_fn,
    'xai_preprocess': preprocess_fn, # Required for XAI
    'xai_model': model_fn # Required for XAI
    })

When you request an explanation from Vertex Explainable AI, it applies your preprocessing function and then your model function. Make sure that the output of your preprocessing function matches the input that your model function expects.

TensorFlow 1.15

If you're using TensorFlow 1.15, do not use tf.saved_model.save. Vertex Explainable AI does not support TensorFlow 1 models saved with this method.
If you build and train your model in Keras, you must convert it to a TensorFlow Estimator and then export it as a SavedModel. This section explains how to save the model.

Following the creation, compilation, training, and assessment of your Keras model, you must:

  • Using tf.keras.estimator.model_to_estimator, convert the Keras model to a TensorFlow Estimator.
     
  • Using tf.estimator.export.build_raw_serving_input_receiver_fn, provide a serving input function.
     
  • Using tf.estimator.export_saved_model, export the model as a SavedModel.
     

Code:

# Build, compile, train, and evaluate your Keras model
model = tf.keras.Sequential(...)
model.compile(...)
model.fit(...)
model.predict(...)


## Converting your Keras model to an Estimator
keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='export')


## Defining a serving input function appropriate for your model
def serving_input_receiver_fn():
  ...
  return tf.estimator.export.ServingInputReceiver(...)


## Export the SavedModel to Cloud Storage, using your serving input function
export_path = keras_estimator.export_saved_model(
  'gs://' + 'YOUR_BUCKET_NAME',
  serving_input_receiver_fn
).decode('utf-8')
print("Model exported to: ", export_path)

Get tensor names from a SavedModel's SignatureDef

As long as your explanation metadata satisfies the requirements of the "basic method" described earlier, you can prepare it using a TensorFlow SavedModel's SignatureDef. This can be useful if you don't have access to the training code that produced the model.
Use the SavedModel CLI to look at your SavedModel's SignatureDef. Find out more about using the SavedModel CLI.

Consider the following example SignatureDef:

The given SavedModel SignatureDef contains the following input(s):

Code:

  inputs['my_numpy_input'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: x:0

The given SavedModel SignatureDef contains the following output(s):

Code:

  outputs['probabilities'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 1)
      name: dense/Softmax:0

Method name is: tensorflow/serving/predict

The graph has two tensors: an input tensor named x:0 and an output tensor named dense/Softmax:0. When configuring your Model for explanations, use x:0 as the input tensor name and dense/Softmax:0 as the output tensor name in the ExplanationMetadata message.

Let's look into the details of configuring visualisation settings.

Configure visualization settings 

Vertex Explainable AI provides built-in capabilities for visualising your image data, and you can configure visualisations for custom-trained image models.
When you request an explanation for an image classification model, you get an image overlay showing the pixels (integrated gradients) or regions (integrated gradients or XRAI) that contributed to the prediction.
Whether you visualise your explanations with an integrated gradients approach or an XRAI approach depends on the kind of data you're working with.

  • In general, XRAI works better with natural images and offers a better high-level summary of findings, such as demonstrating how positive attribution is connected to a dog's face shape.
     
  • Integrated gradients (IG) are helpful for locating more specific attributions since they frequently include information at the pixel level.
     

Get started

When you build a Model resource that supports Vertex Explainable AI or when you modify the Model's ExplanationSpec, configure visualisation.

To configure visualisation for your model, fill out the visualization field of the InputMetadata message for the feature you want to visualise. In this configuration message you can specify settings such as the overlay type to use, which attributions are highlighted, colour, and more. Every setting is optional.

Visualization options

The default and suggested settings depend on the attribution method (integrated gradients or XRAI). The configuration options and their potential applications are listed below.

  • type: The visualisation type used: outlines or pixels. You must specify this field if you're using integrated gradients; you cannot specify it if you're using XRAI.
    The field defaults to OUTLINES, which displays regions of attribution for integrated gradients. To display per-pixel attribution, set the field to PIXELS.
     
  • polarity: The direction of the highlighted attributions. The default, positive, highlights the areas with the most positive attributions; that is, the pixels that contributed most to the model's predicted class. Setting polarity to negative highlights the areas that led the model away from predicting the positive class. A negative polarity can be useful for debugging your model by identifying false-negative regions. You can also set polarity to both to display positive and negative attributions together.
     
  • clip_percent_upperbound: Excludes attributions above the specified percentile from the highlighted areas. Used together, the clip settings can filter out noise and make areas of strong attribution easier to see.
     
  • clip_percent_lowerbound: Removes attributions from the highlighted areas that fall below the specified percentile.
     
  • color_map: The colour palette used for the highlighted areas. For integrated gradients, the default is pink_green, which displays positive attributions in green and negative ones in pink. XRAI visualisations use a gradient colour map. The XRAI default, viridis, highlights the most influential areas in yellow and the least influential in blue.
     
  • overlay_type: How the original image is displayed in the visualisation. Adjusting the overlay can help if the original image makes it difficult to see the visualisation.

Example configurations

The example visualization settings below provide a starting point for experimenting with different settings.

Integrated gradients

In the case of integrated gradients, if the attribution areas are too noisy, you might need to change the clip values.

Code:

visualization: {
  "type": "OUTLINES",
  "polarity": "positive",
  "clip_percent_lowerbound": 70,
  "clip_percent_upperbound": 99.9,
  "color_map": "pink_green",
  "overlay_type": "grayscale"
}

XRAI

As the overlay uses a gradient to represent areas of high and low attribution, we advise beginning with no clip values for XRAI visualisations.

Code:

visualization: {
  "type": "PIXELS",
  "polarity": "positive",
  "clip_percent_lowerbound": 0,
  "clip_percent_upperbound": 100,
  "color_map": "viridis",
  "overlay_type": "grayscale"
}

Let's look into the details of improved explanations.

Improve explanations

When using custom-trained models, you can configure specific parameters to improve your explanations. This guide explains how to inspect the explanations you receive from Vertex Explainable AI for error, and how to adjust your Vertex Explainable AI configuration to reduce error.
If you want to use Vertex Explainable AI with an AutoML tabular model, you don't need to perform any configuration; Vertex AI automatically configures the model for Vertex Explainable AI. Skip this document and read about getting explanations instead.
All of Vertex Explainable AI's feature attribution methods are based on variants of Shapley values. Because Shapley values are computationally expensive, Vertex Explainable AI provides approximations rather than exact values.

By altering the following inputs, you can decrease the approximation error and get closer to the precise values:

  • Increasing the number of paths or integral steps.
     
  • Modifying the input baseline(s) that you choose.
     
  • Adding extra input baselines. Using extra baselines increases latency for the integrated gradients and XRAI methods, but the sampled Shapley method does not add latency with extra baselines.
     

Inspect explanations for error

After requesting and receiving explanations from Vertex Explainable AI, you can check them for approximation error. High approximation error indicates that the explanations may not be reliable. This section describes several ways to check for error.

Check the approximationError field

Vertex Explainable AI returns an approximation error in the approximationError field of each Attribution. If your approximation error exceeds 0.05, consider adjusting your Vertex Explainable AI configuration.
For the integrated gradients technique, the approximation error is determined by comparing the sum of the feature attributions with the difference between the predicted values for the input and for the baseline. In the integrated gradients technique, the feature attribution is an approximation of the integral of gradient values between the baseline and the input. The Gaussian quadrature rule is used to approximate the integral because it is more precise than Riemann sum methods.

Check the difference between predictions and baseline output

Vertex Explainable AI returns two values for each Attribution: an instanceOutputValue that represents the portion of the prediction output for which feature attributions are relevant, and a baselineOutputValue that indicates what this portion of the prediction output would be if the prediction were made using an input baseline as opposed to the actual input instance.
You might need to modify your input baselines if any attributions have a difference between instanceOutputValue and baselineOutputValue of less than 0.05.
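The two checks described in this section can be sketched as a small helper. The attribution dictionaries below mirror the Attribution fields mentioned above, but the exact response shape is an assumption:

```python
def unreliable_attributions(attributions, max_error=0.05, min_output_diff=0.05):
    """Flag attributions whose approximation error is high, or whose
    prediction output is too close to the baseline output."""
    flagged = []
    for index, attribution in enumerate(attributions):
        high_error = attribution.get("approximationError", 0.0) > max_error
        small_diff = abs(
            attribution["instanceOutputValue"] - attribution["baselineOutputValue"]
        ) < min_output_diff
        if high_error or small_diff:
            flagged.append(index)
    return flagged

# Hypothetical attributions: the second has high error, the third a tiny diff.
sample = [
    {"approximationError": 0.01, "instanceOutputValue": 0.9, "baselineOutputValue": 0.2},
    {"approximationError": 0.10, "instanceOutputValue": 0.9, "baselineOutputValue": 0.2},
    {"approximationError": 0.01, "instanceOutputValue": 0.5, "baselineOutputValue": 0.49},
]
print(unreliable_attributions(sample))  # [1, 2]
```

Flagged attributions are candidates for the configuration adjustments described in the next section.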

Adjust your configuration

The following sections describe how to adjust your Vertex Explainable AI configuration to reduce error. To make any of the following changes, you must configure a new Model resource with an updated ExplanationSpec, or override the ExplanationSpec of an existing Model by redeploying it to an Endpoint resource or by getting new batch predictions.

Increase steps or paths

You can increase:

  • the number of sampled Shapley paths (SampledShapleyAttribution.pathCount)
  • the number of steps for integrated gradients (IntegratedGradientsAttribution.stepCount) or XRAI (XraiAttribution.stepCount)
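For example, here are sketches of ExplanationParameters fragments with increased counts. The field names follow the Vertex AI ExplanationSpec API, but treat the exact payload shape as an assumption to check against the API reference:

```python
# Hypothetical ExplanationParameters fragments; higher counts reduce the
# approximation error at the cost of more computation.
sampled_shapley_params = {"sampledShapleyAttribution": {"pathCount": 25}}
integrated_gradients_params = {"integratedGradientsAttribution": {"stepCount": 100}}
xrai_params = {"xraiAttribution": {"stepCount": 100}}
print(sampled_shapley_params["sampledShapleyAttribution"]["pathCount"])  # 25
```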

Adjust baselines

Input baselines represent a feature that provides no additional information. Baselines for tabular models can be the median, minimum, maximum, or random values with respect to your training data. For image models, the baseline can be a black image, a white image, a grey image, or an image with random pixel values.

You can optionally specify the input_baselines field when you configure Vertex Explainable AI. Otherwise, Vertex AI chooses input baselines for you. If you are experiencing the issues described in earlier sections of this guide, you may want to adjust the input baselines for each input of your Model.

In general, the following steps are followed:

  • Start with one baseline representing median values.
     
  • Change this baseline to one representing random values.
     
  • Try two baselines, one representing the minimum and one representing the maximum values.
     
  • Add another baseline representing random values.
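For tabular data, the candidate baselines above can be computed directly from the training data. A minimal sketch with NumPy, where train_data is a hypothetical 2-D array of training rows standing in for your real dataset:

```python
import numpy as np

# Hypothetical training data: 100 rows, 4 tabular features.
rng = np.random.default_rng(0)
train_data = rng.random((100, 4))

median_baseline = np.median(train_data, axis=0).tolist()        # step 1
random_baseline = rng.random(train_data.shape[1]).tolist()      # step 2
min_baseline = train_data.min(axis=0).tolist()                  # step 3 (low)
max_baseline = train_data.max(axis=0).tolist()                  # step 3 (high)

# Multiple baselines can be supplied together, e.g.:
input_baselines = [min_baseline, max_baseline, random_baseline]
```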
     

Example for tabular data

The following Python code creates an ExplanationMetadata message for a hypothetical TensorFlow model trained on tabular data.
The list input_baselines can contain several baselines, but this sample specifies only one. The baseline is a list of the training data's median values (in this example, train_data).

Code:

explanation_metadata = {
    "inputs": {
        "FEATURE_NAME": {
            "input_tensor_name": "INPUT_TENSOR_NAME",
            # Baseline: the median of each training-data column
            "input_baselines": [train_data.median().values.tolist()],
            "encoding": "bag_of_features",
            # Map each index of the input tensor to a feature name
            "index_feature_mapping": train_data.columns.tolist()
        }
    },
    "outputs": {
        "OUTPUT_NAME": {
            "output_tensor_name": "OUTPUT_TENSOR_NAME"
        }
    }
}

Example for image data

The following Python code creates an ExplanationMetadata message for a hypothetical TensorFlow model trained on image data.
The list input_baselines can contain several baselines, but this sample specifies only one. Here, the baseline is a set of random values. Using random values as an image baseline is a suitable strategy if your training dataset contains many black and white photos.
Otherwise, set input_baselines to [0, 1] to represent black and white images.

Code:

import numpy as np

# Baseline: an image of random pixel values (192x192, 3 channels)
random_baseline = np.random.rand(192, 192, 3)

explanation_metadata = {
    "inputs": {
        "FEATURE_NAME": {
            "input_tensor_name": "INPUT_TENSOR_NAME",
            "modality": "image",
            "input_baselines": [random_baseline.tolist()]
        }
    },
    "outputs": {
        "OUTPUT_NAME": {
            "output_tensor_name": "OUTPUT_TENSOR_NAME"
        }
    }
}
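If your dataset is not dominated by black and white images, the guidance above suggests the scalar baselines 0 and 1 instead of a random image. The same metadata, with only the input_baselines value changed (the placeholder names are unchanged from the sample above):

```python
# Black (0) and white (1) baselines, per the guidance above, instead of
# a random image. The placeholder tensor and feature names are the same
# ones used in the sample above.
explanation_metadata = {
    "inputs": {
        "FEATURE_NAME": {
            "input_tensor_name": "INPUT_TENSOR_NAME",
            "modality": "image",
            "input_baselines": [0, 1]
        }
    },
    "outputs": {
        "OUTPUT_NAME": {
            "output_tensor_name": "OUTPUT_TENSOR_NAME"
        }
    }
}
```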

Improve explanations - AutoML

When working with AutoML image models, you can configure particular settings to improve your explanations. All of the feature attribution techniques used by Vertex Explainable AI are based on variants of Shapley values. Because Shapley values are computationally expensive, Vertex Explainable AI offers approximations rather than the exact values.

By altering the following inputs, you can decrease the approximation error and get closer to the precise values:

  • Increasing the number of paths or integral steps.

Increasing steps

To reduce the approximation error, you can increase:

  • the number of integral steps

It's time to dive into the details of access control with IAM.

Access control with IAM 

Vertex AI controls access to resources using Identity and Access Management (IAM). Access can be controlled at the project level or the resource level. To grant access at the project level, assign one or more roles to a principal (a user, group, or service account). To grant access on a specific resource, set an IAM policy on that resource; the resource must support resource-level policies. The policy specifies which principals are granted which roles.

Vertex AI supports a variety of IAM role types, including:

  • Predefined Role: Predefined roles let you grant a group of related permissions to your Vertex AI resources at the project level.
     
  • Basic Role: The basic roles of Owner, Editor, and Viewer are shared by all Google Cloud services and control access to your Vertex AI resources at the project level.
     
  • Custom Role: You can design your own role with a chosen set of permissions, grant the role to users in your organisation, and manage your custom roles.
     

Consult the instructions on granting, altering, and revoking access to learn how to add, amend, or remove these roles from your Vertex AI project.

Project-level versus resource-level policies

Resource-level policy settings do not affect project-level policies. A resource inherits every policy of its ancestors. You can use these two levels of granularity to tailor permissions. For instance, you could grant users read access to all resources at the project level, then grant them write access to individual resources at the resource level.
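Conceptually, a principal's effective permissions on a resource are the union of what the project-level policy and the resource-level policy grant. A toy illustration in Python; the principals, resource names, and permission strings are invented for the example:

```python
# Toy model of IAM inheritance: project-level grants apply to every
# resource in the project; resource-level grants add to them.
project_grants = {"alice": {"read"}}                          # project level
resource_grants = {"featurestore-1": {"alice": {"write"}}}    # resource level

def effective_permissions(principal, resource):
    perms = set(project_grants.get(principal, set()))
    perms |= resource_grants.get(resource, {}).get(principal, set())
    return perms

print(sorted(effective_permissions("alice", "featurestore-1")))  # ['read', 'write']
print(sorted(effective_permissions("alice", "featurestore-2")))  # ['read']
```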

Supported resources

Vertex AI supports resource-level policies on Vertex AI Feature Store featurestores and their entity-type resources. After access to a resource is granted or revoked, the change may take some time to take effect.

Service Account

A service account is a special account used by an application or a virtual machine (VM) instance rather than a person. You can create service accounts and assign permissions to them to give a resource or application a specific set of permissions.

Service Agents

Service agents are Google-managed service accounts, created automatically, that allow a service to access resources on your behalf.

The Vertex AI Custom Code Service Agent is created only if you run custom training code to train a custom-trained model.

Grant Vertex AI service agents access to other resources

A Vertex AI service agent may occasionally need to be granted additional roles. For instance, if you want Vertex AI to access a Cloud Storage bucket in a different project, you must grant the service agent one or more additional roles.

Grant access to Vertex AI to resources in your home project

To grant a Vertex AI service agent additional roles in your home project:

  • Go to the IAM page of the console for your home project.
     
  • Check the option Include Google-provided role grants.
     
  • To find the Vertex AI service agents, use the filter Principal:@gcp-sa-aiplatform-cc.iam.gserviceaccount.com.
     
  • Select the service agent you want to grant permissions to, then click the edit pencil icon.
     
  • Grant the service account the necessary roles, then save your changes.
     

Grant access to Vertex AI to resources in a different project

You must grant the Vertex AI service account permissions in the project that contains your data sources or destinations. The Vertex AI service account is created after you launch your first asynchronous job (for example, creating an endpoint). You can also create the Vertex AI service account explicitly using the gcloud CLI by following these procedures.

To grant Vertex AI permissions in another project:

  • Go to the IAM page of the console for your home project (the project where you are using Vertex AI).
     
  • Check the option Include Google-provided role grants.
     
  • To find the Vertex AI service agents, use the filter Principal:@gcp-sa-aiplatform-cc.iam.gserviceaccount.com.
     
  • Choose the service agent you want to grant permissions to and make a note of its email address (listed under Principal).
     
  • Switch to the project where the permissions need to be granted.
     
  • Click Add, then enter the email address in New principals.
     
  • Add each necessary role, then click Save.
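The console steps above have a CLI equivalent. A hedged sketch using gcloud — the project ID, service-agent email, and role below are placeholders, so substitute your own values, and remove the echo to actually apply the grant:

```shell
# Placeholder values -- replace with your own project, agent email, and role.
PROJECT_ID="target-project"
AGENT="service-123456@gcp-sa-aiplatform-cc.iam.gserviceaccount.com"
ROLE="roles/storage.objectViewer"

# Grant the role in the target project. 'echo' is used here so the sketch
# only prints the command; remove it to execute the grant for real.
echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:$AGENT" \
  --role="$ROLE"
```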
     

Provide access to Google Sheets

If you use an external BigQuery data source backed by Google Sheets, you must share your Google Sheet with the Vertex AI service account. The Vertex AI service account is created after you launch your first asynchronous job (for example, creating an endpoint). You can also create the Vertex AI service account explicitly using the gcloud CLI by following these instructions.

In order to grant Vertex AI access to your Sheets file:

  • Navigate to the console's IAM page.
     
  • Locate and copy the email address of the service account named Vertex AI Service Agent (listed under Principal).
     
  • Open your Sheets document and share it with that address.
     

Frequently Asked Questions

What is Vertex AI Workbench?

Vertex AI Workbench is a single environment where data scientists can complete all of their machine learning (ML) work, from experimentation to deployment to managing and monitoring models. It is a fully managed, scalable, Jupyter-based compute infrastructure that is enterprise-ready, with user management and security controls.

What is the advantage of using explainable AI?

Explainable AI helps humans better understand and explain machine learning (ML), deep learning, and neural networks. ML models are frequently viewed as opaque "black boxes" that cannot be understood.

Is Vertex AI replacing AI platforms?

Vertex AI Pipelines replaces the current AI Platform Pipelines for automating, monitoring, and governing your ML systems with an orchestrator. You will be able to run pipelines built with both TensorFlow Extended and the Kubeflow Pipelines SDK.

Conclusion

In this article, we have extensively discussed the details of Vertex Explainable AI, along with using TensorFlow with Vertex Explainable AI, configuring explanations, configuring visualisation settings, and access control with IAM.

We hope that this blog has helped you enhance your knowledge regarding Vertex Explainable AI, and if you would like to learn more, check out our articles on Google Cloud Certification. You can refer to our guided paths on the Coding Ninjas Studio platform to learn more about DSA, DBMS, Competitive Programming, Python, Java, JavaScript, etc. To practice and improve yourself for interviews, you can also check out Top 100 SQL problems, Interview experience, Coding interview questions, and the Ultimate guide path for interviews. Do upvote our blog to help other ninjas grow. Happy Coding!!

Topics covered

1. Introduction
2. Introduction to Vertex Explainable AI
3. Feature attributions
   3.1. Advantages
   3.2. Limitations
   3.3. Differentiable and non-differentiable models
   3.4. Feature attribution methods
        3.4.1. Sampled Shapley Method
        3.4.2. Integrated Gradients Method
        3.4.3. XRAI Method
4. Configure explanations
   4.1. When and where to configure explanations
        4.1.1. Configure explanations when creating or importing models
        4.1.2. Configure explanations when deploying models or getting batch predictions
        4.1.3. Override the configuration when getting online explanations
5. Use TensorFlow with Vertex Explainable AI
   5.1. Finding input and output tensor names during training
        5.1.1. The basic method
        5.1.2. Adjusting training code and finding tensor names in the special cases
   5.2. Export TensorFlow SavedModels for Vertex Explainable AI
        5.2.1. TensorFlow 2
        5.2.2. TensorFlow 1.15
   5.3. Get tensor names from a SavedModel's SignatureDef
6. Configure visualization settings
   6.1. Get started
   6.2. Visualization options
   6.3. Example configurations
        6.3.1. Integrated gradients
        6.3.2. XRAI
7. Improve explanations
   7.1. Inspect explanations for the error
   7.2. Check the approximationError field
   7.3. Checking the difference between predictions and baseline output
   7.4. Adjust your configuration
        7.4.1. Increase steps or paths
        7.4.2. Adjust baselines
        7.4.3. Example for tabular data
        7.4.4. Example for image data
8. Improve explanations - AutoML
   8.1. Increasing steps
9. Access control with IAM
   9.1. Project-level versus resource-level policies
   9.2. Supported resources
   9.3. Service Account
   9.4. Service Agents
   9.5. Grant Vertex AI service agents access to other resources
   9.6. Grant access to Vertex AI to resources in your home project
   9.7. Grant access to Vertex AI to resources in a different project
   9.8. Provide access to Google Sheets
10. Frequently Asked Questions
    10.1. What is Vertex AI Workbench?
    10.2. What is the advantage of using explainable AI?
    10.3. Is Vertex AI replacing AI platforms?
11. Conclusion