**Introduction**

Before moving to the vectorized form of logistic regression, let us briefly discuss logistic regression. Logistic regression is a supervised machine learning algorithm used for binary classification. The output of logistic regression is the probability that the input belongs to the target class. Logistic regression can be used in many classification problems, such as predicting whether a breast tumor is malignant or benign, or whether an email is spam.


**Vectorizing Logistic Regression**

Let us consider a training set with M training examples. To train on such a dataset, we would normally have to go through the propagation steps for each of the M training examples, one at a time.

Let x^{(i)} be the i-th training example.

Here w and x^{(i)} are both of size (N_{x} by 1), where N_{x} is the number of features.

For each training example we calculate the following steps:

z^{(i)} = w^{T}x^{(i)} + b

a^{(i)} = σ(z^{(i)}), where σ is the sigmoid function.

Instead of using an explicit loop over the M training examples, we can do all the calculations with a single matrix operation.
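To make the contrast concrete, here is a minimal numpy sketch of the loop-based version we want to replace. The shapes (3 features, 4 examples) and the random data are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    # Standard logistic function.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: N_x = 3 features, M = 4 training examples.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))   # column i is training example x^(i)
w = rng.standard_normal((3, 1))   # weights, shape (N_x, 1)
b = 0.5                           # bias term

# Explicit loop: handle each training example separately.
a = np.zeros((1, 4))
for i in range(4):
    x_i = X[:, i].reshape(3, 1)            # x^(i), shape (N_x, 1)
    z_i = (np.dot(w.T, x_i) + b).item()    # z^(i) = w^T x^(i) + b
    a[0, i] = sigmoid(z_i)                 # a^(i) = sigmoid(z^(i))
```

The loop body repeats the same dot product M times; the vectorized form below collapses all of it into one matrix multiplication.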

Let us declare X as the matrix of all M training inputs, stacked column-wise. The size of matrix X will be (N_{x} by M). W is an (N_{x} by 1) matrix, where W represents the weights of the logistic regression model and N_{x} is the number of features.

Now we will declare another matrix Z where we will calculate z^{(i)} for all the training examples in the dataset.

Z consists of all the z^{(i)} values. Z will be a numpy array of shape (1, M).

In numpy, we can calculate the value of Z using a single operation, as shown below.
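A minimal sketch of that single operation, again with made-up shapes (N_x = 3, M = 4) and random data:

```python
import numpy as np

# Hypothetical shapes: N_x = 3 features, M = 4 training examples.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4))   # training inputs, shape (N_x, M)
w = rng.standard_normal((3, 1))   # weights, shape (N_x, 1)
b = 0.5                           # bias; numpy broadcasts it across all M columns

# One operation: Z = [z^(1), z^(2), ..., z^(M)], shape (1, M).
Z = np.dot(w.T, X) + b
```

Note that b is a plain scalar: numpy broadcasting adds it to every entry of the (1, M) product, so no loop over columns is needed.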

Now we will calculate the activation for each z^{(i)} using the sigmoid activation function, storing the activations of all of Z in a matrix A, as shown below.

Here a^{(i)} stores the activation for the respective z^{(i)}.
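The full vectorized forward pass can then be sketched in two lines of numpy. The data and shapes below are illustrative assumptions, not part of the original text:

```python
import numpy as np

def sigmoid(z):
    # Element-wise sigmoid: works on scalars and whole numpy arrays alike.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy data: N_x = 3 features, M = 4 training examples.
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4))   # (N_x, M) training inputs
w = rng.standard_normal((3, 1))   # (N_x, 1) weights
b = 0.5

Z = np.dot(w.T, X) + b   # (1, M): all z^(i) in one matrix product
A = sigmoid(Z)           # (1, M): A[0, i] holds a^(i) = sigmoid(z^(i))
```

Because np.exp is an element-wise ufunc, sigmoid(Z) applies the activation to every z^{(i)} at once, giving the whole matrix A without an explicit loop.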