Table of contents
1. Introduction
2. TensorFlow
3. How to install TensorFlow?
4. The Computational Graph
5. Variables
6. Placeholders
7. Linear Regression model using TensorFlow
8. tf.contrib.learn
9. What are TensorFlow APIs?
   9.1. TensorFlow Core
   9.2. TensorFlow Estimators
   9.3. TensorFlow Keras
10. Frequently Asked Questions
   10.1. What is the difference between TensorFlow and other deep learning frameworks?
   10.2. Can TensorFlow be used for tasks other than deep learning?
   10.3. Is TensorFlow suitable for beginners in machine learning?
11. Conclusion
Last Updated: Aug 16, 2024

What is Tensorflow in Python

Author: Rinki Deka
Introduction

TensorFlow is a powerful open-source library used for machine learning & deep learning in Python. It was developed by Google & allows you to create, train & deploy machine learning models. With TensorFlow, you can build neural networks, perform complex mathematical computations & analyze large datasets. 


In this article, we will discuss the basics of TensorFlow, like installation, computational graphs, variables, placeholders & building a simple linear regression model. 

TensorFlow

TensorFlow is a free & open-source software library for machine learning & artificial intelligence. It was created by the Google Brain team & released in 2015. TensorFlow uses a graph-based approach, where nodes in the graph represent mathematical operations & the edges represent the data, called tensors, that flow between them. This allows TensorFlow to efficiently perform complex computations on large datasets.
 

TensorFlow provides a wide range of tools & resources for building & deploying machine learning models. It supports multiple programming languages, including Python, C++, Java & Go. TensorFlow also has a large & active community of developers who contribute to its development & provide support through forums, tutorials & documentation.
 

One of the key features of TensorFlow is its ability to run on multiple platforms, including CPUs, GPUs & TPUs (Tensor Processing Units). This allows you to train & deploy models on a variety of devices, from desktop computers to mobile phones & embedded systems.
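
For instance, here is a small sketch showing how to pin a computation to a specific device with `tf.device` (the device names are assumptions that depend on your hardware):

import tensorflow as tf

# Pin this computation to the CPU; '/GPU:0' would target the first GPU instead
with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [1.0, 1.0]])
    c = tf.matmul(a, b)

print(c)  # in TF 2.x this prints the result eagerly; in 1.x, run it in a session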
 

TensorFlow has been used in many real-world applications, such as image & speech recognition, natural language processing, recommendation systems & robotics. It has also been used in research to advance the field of machine learning & develop new algorithms & techniques.

How to install TensorFlow?

Installing TensorFlow is a straightforward process. Let’s see the steps to install TensorFlow using pip, which is the package installer for Python:

1. Open a terminal or command prompt.
 

2. Create a new virtual environment (optional but recommended):

python -m venv tensorflow_env


3. Activate the virtual environment:

- On Windows:

tensorflow_env\Scripts\activate


- On macOS & Linux

source tensorflow_env/bin/activate


4. Install TensorFlow using pip:

pip install tensorflow


5. Verify the installation by importing TensorFlow in a Python script:

import tensorflow as tf
print(tf.__version__)


If you have a GPU & want to use it for accelerated computations, you can install the GPU version of TensorFlow. Note that since TensorFlow 2.1, the standard `tensorflow` package includes GPU support out of the box; the separate `tensorflow-gpu` package is only needed for older 1.x releases:

pip install tensorflow-gpu


Note: Make sure you have the necessary NVIDIA drivers & CUDA toolkit installed for GPU support.
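
Once installed, you can check whether TensorFlow actually sees a GPU. A quick sketch using the TensorFlow 2.x API (older 1.x releases used `tf.test.is_gpu_available()` instead):

import tensorflow as tf

# Lists the GPUs visible to TensorFlow; an empty list means it will run on the CPU
print(tf.config.list_physical_devices('GPU'))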


That's it! You now have TensorFlow installed & ready to use in your Python projects.

The Computational Graph

TensorFlow uses a computational graph to represent the flow of data & operations in a machine learning model. A computational graph is a directed graph where the nodes represent operations (such as addition, multiplication, or activation functions) & the edges represent the data (tensors) that flow between the operations.

For example:


import tensorflow as tf

# Note: this example uses the TensorFlow 1.x graph-and-session API.
# In TensorFlow 2.x, use tf.compat.v1 (with eager execution disabled) to run it.

# Create two input nodes
a = tf.constant(5, name='a')
b = tf.constant(3, name='b')

# Create an operation node
c = tf.add(a, b, name='c')

# Create a session to run the graph
with tf.Session() as sess:
    result = sess.run(c)
    print(result)


Output

8


In this example, we create two input nodes `a` & `b` with constant values of 5 & 3, respectively. We then create an operation node `c` that adds the values of `a` & `b`. Finally, we create a session to run the graph & retrieve the result of the computation.
 

The computational graph allows TensorFlow to optimize the execution of the model by parallelizing operations, distributing computations across multiple devices & minimizing the memory usage. It also enables TensorFlow to perform automatic differentiation, which is essential for training neural networks using backpropagation.
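
To make the automatic differentiation point concrete, here is a minimal sketch in the same TensorFlow 1.x style (in TF 2.x, `tf.GradientTape` plays this role):

import tensorflow as tf

x = tf.Variable(3.0)
y = x * x  # y = x^2, so dy/dx = 2x

# tf.gradients adds gradient ops to the graph by walking it backwards
grad = tf.gradients(y, [x])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))  # [6.0]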
 

When you build a machine learning model in TensorFlow 1.x (the style used throughout this article's examples), you typically define the computational graph first & then run it in a session to perform the actual computations & update the model parameters. TensorFlow 2.x executes operations eagerly by default, but it still builds the same kind of graph under the hood when you use tf.function.

Variables

In TensorFlow, variables are used to store and update the parameters of a machine learning model during training. Variables are mutable tensors that can hold and update their values across multiple iterations of the graph execution.


For example:


import tensorflow as tf
# Note: this example uses the TensorFlow 1.x API & the MNIST helper that shipped with it
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset (downloaded on first run)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Create variables with initial values
weights = tf.Variable(tf.random_normal([784, 10]), name='weights')
biases = tf.Variable(tf.zeros([10]), name='biases')

# Define the model
x = tf.placeholder(tf.float32, [None, 784])
y = tf.nn.softmax(tf.matmul(x, weights) + biases)

# Define the loss function and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train_step = optimizer.minimize(cross_entropy)

# Create a session and initialize the variables
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Train the model
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Evaluate the model
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

 

In this example, we create two variables: `weights` and `biases`. The `weights` variable is initialized with random values from a normal distribution, while the `biases` variable is initialized with zeros.
 

We then define the model using these variables. The `x` placeholder represents the input data, and the `y` tensor represents the output of the model, which is computed by applying the softmax function to the matrix multiplication of `x` and `weights`, plus the `biases`.

 

We define the loss function using cross-entropy and create an optimizer (gradient descent) to minimize the loss. The `train_step` operation updates the variables based on the gradients computed during backpropagation.


Finally, we create a session, initialize the variables, and train the model by running the `train_step` operation iteratively. After training, we evaluate the model's accuracy on the test set.


Note: Variables are a fundamental concept in TensorFlow and are used very frequently in building and training machine learning models.
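
To isolate the mechanics, here is a minimal sketch (again in the TF 1.x style) showing that a variable's state persists across session runs:

import tensorflow as tf

counter = tf.Variable(0, name='counter')
increment = tf.assign(counter, counter + 1)  # an op that updates the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(increment))  # prints 1, 2, 3 - the state persists between runs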

Placeholders

Placeholders in TensorFlow are used to feed input data to the computational graph. They allow you to define the input shape and type without providing the actual values until the graph is executed in a session.

For example:

import tensorflow as tf

# Create placeholders for input features and labels
features = tf.placeholder(tf.float32, shape=[None, 3])
labels = tf.placeholder(tf.float32, shape=[None, 1])

# Define the model
weights = tf.Variable(tf.random_normal([3, 1]))
biases = tf.Variable(tf.zeros([1]))
output = tf.matmul(features, weights) + biases

# Define the loss function and optimizer
loss = tf.reduce_mean(tf.square(output - labels))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train_op = optimizer.minimize(loss)

# Create input data
input_data = [[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]]
input_labels = [[10.0],
                [20.0],
                [30.0]]

# Create a session and run the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   
    # Train the model
    for _ in range(1000):
        _, loss_value = sess.run([train_op, loss], feed_dict={features: input_data, labels: input_labels})    
    # Make predictions
    predictions = sess.run(output, feed_dict={features: input_data})
    print("Predictions:", predictions)


In this example, we create two placeholders: `features` and `labels`. The `features` placeholder represents the input features and has a shape of `[None, 3]`, where `None` means that the first dimension (batch size) can be of any size. The `labels` placeholder represents the corresponding labels and has a shape of `[None, 1]`.


We define the model using variables (`weights` and `biases`) and the placeholders. The output of the model is computed by multiplying the `features` with `weights` and adding `biases`.


We define the loss function as the mean squared error between the model's output and the labels. We create an optimizer (gradient descent) to minimize the loss.


We provide input data (`input_data` and `input_labels`) that will be fed to the placeholders during graph execution.


Finally, we create a session and run the graph. We train the model by running the `train_op` operation iteratively, feeding the input data through the `feed_dict` argument. After training, we make predictions by running the `output` tensor with the input data.


Note: Placeholders allow you to feed different input data to the graph in each session run, which makes it flexible and reusable for different datasets.
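
For instance, the same graph can serve two different batches on consecutive runs (a minimal sketch in the same 1.x style):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])
doubled = x * 2.0

with tf.Session() as sess:
    # The same graph, fed different data on each run
    print(sess.run(doubled, feed_dict={x: [1.0, 2.0]}))           # [2. 4.]
    print(sess.run(doubled, feed_dict={x: [10.0, 20.0, 30.0]}))   # [20. 40. 60.]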

Linear Regression model using TensorFlow

Let's build a simple linear regression model using TensorFlow to predict housing prices based on the size of the houses.

import tensorflow as tf
import numpy as np

# Note: this example uses the TensorFlow 1.x API (placeholders & sessions)

# Generate random input data
num_samples = 100
house_sizes = np.random.randint(low=1000, high=3500, size=num_samples)
house_prices = house_sizes * 100.0 + np.random.randint(low=20000, high=70000, size=num_samples)

# Normalize the input data
def normalize(array):
    return (array - array.mean()) / array.std()

house_sizes_norm = normalize(house_sizes)
house_prices_norm = normalize(house_prices)

# Create placeholders for input features and labels
size = tf.placeholder(tf.float32, name='size')
price = tf.placeholder(tf.float32, name='price')

# Define the variables and model
intercept = tf.Variable(0.0, name='intercept')
slope = tf.Variable(0.0, name='slope')
predicted_price = intercept + slope * size

# Define the loss function and optimizer
loss = tf.reduce_mean(tf.square(predicted_price - price))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

# Create a session and run the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Train the model
    for _ in range(1000):
        for (x, y) in zip(house_sizes_norm, house_prices_norm):
            sess.run(train_op, feed_dict={size: x, price: y})
    # Retrieve the learned parameters
    intercept_value, slope_value = sess.run([intercept, slope])
    print("Learned parameters:")
    print("Intercept:", intercept_value)
    print("Slope:", slope_value)
    # Make predictions
    test_house_sizes = np.array([1500, 2000, 2500])
    # Normalize with the *training* statistics, not the test set's own mean/std
    test_house_sizes_norm = (test_house_sizes - house_sizes.mean()) / house_sizes.std()
    predicted_prices_norm = sess.run(predicted_price, feed_dict={size: test_house_sizes_norm})
    # Denormalize to get prices back in dollars
    predicted_prices = predicted_prices_norm * house_prices.std() + house_prices.mean()
    print("\nPredictions:")
    for i in range(len(test_house_sizes)):
        print("House size:", test_house_sizes[i], "sq.ft")
        print("Predicted price:", round(float(predicted_prices[i]), 2), "dollars")


In this example, we generate random input data for house sizes and prices. We normalize the data to have zero mean and unit variance, which helps in convergence during training.
 

We create placeholders for the input features (`size`) and labels (`price`). We define the variables (`intercept` and `slope`) and the linear regression model (`predicted_price = intercept + slope * size`).
 

We define the loss function as the mean squared error between the predicted prices and the actual prices. We create an optimizer (gradient descent) to minimize the loss.
 

We create a session and run the graph. We train the model by iteratively running the `train_op` operation, feeding the input data through the `feed_dict` argument.
 

After training, we retrieve the learned parameters (`intercept` and `slope`) and make predictions for new house sizes. We denormalize the predicted prices to obtain the actual price values.
 

The output will show the learned parameters (intercept and slope) and the predicted prices for the given house sizes.


Note: The predicted prices may vary each time you run the code due to the random generation of input data.

tf.contrib.learn

`tf.contrib.learn` is a high-level TensorFlow library that simplifies the process of creating, configuring, training, and evaluating machine learning models. It provides a set of pre-built estimators and a simplified API for common machine learning tasks. Note that `tf.contrib` was removed in TensorFlow 2.x, so this example requires TensorFlow 1.x; its functionality now lives in `tf.estimator` and Keras.

Let’s see an example that shows the use of `tf.contrib.learn` for building a neural network classifier:

import tensorflow as tf
# Note: tf.contrib.learn requires TensorFlow 1.x (tf.contrib was removed in 2.x)
from tensorflow.contrib.learn import DNNClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the Iris dataset
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
# Define the feature columns
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
# Create a DNN classifier
classifier = DNNClassifier(hidden_units=[10, 20, 10], n_classes=3, feature_columns=feature_columns)
# Train the classifier
classifier.fit(x=X_train, y=y_train, steps=200)
# Make predictions
predictions = list(classifier.predict(X_test))
# Calculate accuracy
accuracy = accuracy_score(y_test, predictions)
print("Accuracy:", accuracy)


In this example, we use the Iris dataset, which consists of measurements of sepal length, sepal width, petal length, and petal width for three different species of Iris flowers.
 

We start by splitting the dataset into training and testing sets using `train_test_split` from scikit-learn.
 

We define the feature columns using `tf.contrib.layers.real_valued_column`. In this case, we have four real-valued features.
 

We create a `DNNClassifier` (Deep Neural Network Classifier) using `tf.contrib.learn`. We specify the architecture of the neural network by providing the number of hidden units in each layer (`hidden_units`) and the number of classes (`n_classes`). We also pass the feature columns to the classifier.
 

We train the classifier using the `fit` method, specifying the training data (`X_train` and `y_train`) and the number of training steps.
 

After training, we make predictions on the test set using the `predict` method and calculate the accuracy using `accuracy_score` from scikit-learn.
 

The output will display the accuracy of the trained classifier on the test set.

 

Note: Make sure you have scikit-learn installed (`pip install scikit-learn`) to run this example.


`tf.contrib.learn` provides several other estimators, such as `LinearClassifier`, `LinearRegressor`, and `DNNRegressor`, which can be used for different types of machine learning tasks. It also supports features like input functions, feature columns, and distributed training.
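
For instance, here is a minimal `LinearRegressor` sketch, assuming the same 1.x `tf.contrib.learn` interface as the classifier example above (the toy data is made up for illustration):

import numpy as np
import tensorflow as tf
from tensorflow.contrib.learn import LinearRegressor

# Toy data: y = 2x
x = np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)
y = np.array([2.0, 4.0, 6.0, 8.0], dtype=np.float32)

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=1)]
regressor = LinearRegressor(feature_columns=feature_columns)
regressor.fit(x=x, y=y, steps=100)

predictions = list(regressor.predict(np.array([[5.0]], dtype=np.float32)))
print(predictions)  # should be close to 10.0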

What are TensorFlow APIs?

TensorFlow provides a set of APIs (Application Programming Interfaces) at different levels of abstraction to build and train machine learning models. The main TensorFlow APIs are:

1. TensorFlow Core

   - Low-level API that provides fine-grained control over the model's architecture and execution.

   - Suitable for experienced TensorFlow users who need flexibility and customization.

   - Requires manual management of sessions, placeholders, and variables.

2. TensorFlow Estimators

   - High-level API that simplifies the process of creating, training, and evaluating models.

   - Provides pre-built models and a consistent interface for common machine learning tasks.

   - Suitable for beginners and rapid prototyping.

   - Abstracts away the details of session management and graph construction.

3. TensorFlow Keras

   - High-level API for building and training deep learning models.

   - Provides a user-friendly and intuitive interface for creating neural networks.

   - Supports sequential and functional API styles for model definition.

   - Integrates well with other TensorFlow APIs and can be used in conjunction with Estimators.
 

For example:

import tensorflow as tf
from tensorflow import keras
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Load the Iris dataset
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
# Scale the input features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Create a neural network model using TensorFlow Keras
model = keras.Sequential([
    keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    keras.layers.Dense(10, activation='relu'),
    keras.layers.Dense(3, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=1)
# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)


In this example, we use the Iris dataset and split it into training and testing sets. We scale the input features using `StandardScaler` from scikit-learn.

We create a neural network model using the TensorFlow Keras Sequential API. We define the architecture of the model by adding layers using `keras.layers`. In this case, we have two hidden layers with ReLU activation and an output layer with softmax activation.

We compile the model using `model.compile()`, specifying the optimizer, loss function, and evaluation metrics.

We train the model using `model.fit()`, providing the training data, number of epochs, and batch size.

Finally, we evaluate the trained model on the test set using `model.evaluate()` and print the test loss and accuracy.

The output will display the training progress and the final test loss and accuracy.
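
As a point of comparison, the same network can also be written in the Keras functional API style mentioned above (a minimal sketch):

from tensorflow import keras

# Functional API: wire up the graph of layers explicitly
inputs = keras.Input(shape=(4,))
h = keras.layers.Dense(10, activation='relu')(inputs)
h = keras.layers.Dense(10, activation='relu')(h)
outputs = keras.layers.Dense(3, activation='softmax')(h)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])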

Frequently Asked Questions

What is the difference between TensorFlow and other deep learning frameworks?

Unlike frameworks that focus mainly on research & experimentation, TensorFlow emphasizes flexibility, scalability & production-readiness, with deployment tools such as TensorFlow Serving & TensorFlow Lite. It also supports multiple languages & platforms, making it versatile for various use cases.

Can TensorFlow be used for tasks other than deep learning?

Yes, TensorFlow is a general-purpose numerical computation library. It can be used for various tasks, including machine learning, statistical modeling & mathematical operations.
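
For instance, a quick sketch of plain numerical computation with no model involved (TF 2.x eager style):

import tensorflow as tf

m = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(m, m))     # matrix product
print(tf.reduce_mean(m))   # mean of all elements: 2.5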

Is TensorFlow suitable for beginners in machine learning?

TensorFlow offers high-level APIs like Keras that simplify the process of building & training models. These APIs are beginner-friendly & provide a good starting point for learning machine learning with TensorFlow.

Conclusion

In this article, we explained the fundamental concepts of TensorFlow, a powerful open-source library for machine learning & deep learning in Python. We discussed the installation process, the concept of computational graphs, variables, placeholders & building a linear regression model. We also talked about the high-level APIs provided by TensorFlow, like tf.contrib.learn & TensorFlow Keras, which simplify the model development process.

You can also check out our other blogs on Code360.
