Code360 powered by Coding Ninjas X Naukri.com
Table of contents
1. Introduction
2. DSA Interview Questions
3. DSA Interview Questions for Freshers
   1. What is a data structure? Give an example of a commonly used data structure.
   2. What are various data structures available?
   3. What is an algorithm? Give an example of a commonly used algorithm.
   4. Why do we need to do algorithm analysis?
   5. What is a stack, and how is it implemented?
   6. What is a queue, and how is it implemented?
   7. What is the difference between a stack and a queue?
   8. What is a binary search tree, and how is it implemented?
   9. What is the difference between a breadth-first search and a depth-first search?
   10. What is the difference between quicksort and mergesort?
4. DSA Interview Questions for Intermediate
   11. How do you find the maximum value in a binary tree?
   12. What is dynamic programming, and how is it used to solve problems?
   13. How do you implement a breadth-first search?
   14. Given a sorted array of integers, write a program to remove all duplicates in place and return the new length of the array. What is the time complexity of your algorithm?
   15. Given a binary search tree, find the second smallest element in the tree. What is the time complexity of your algorithm?
   16. Write a program to merge overlapping intervals from a list of intervals. What is the time complexity of your algorithm?
   17. Given two sorted arrays, write a program to find the median of the merged array in O(log n) time, where n is the total number of elements in both arrays.
   18. Implement a stack using two queues. What is the time complexity of push and pop operations in your implementation?
   19. How do you implement Dijkstra's algorithm, and what is its time complexity?
   20. What is a suffix tree, and how is it used to solve problems?
5. DSA Interview Questions for Experienced
   21. Write a program to find the shortest path between two nodes in a weighted graph using the A* algorithm. What is its time complexity?
   22. Using a selection algorithm, write a program to find the kth smallest element in an unsorted array.
   23. What is the difference between a hash table and a balanced search tree, and what are the advantages and disadvantages of each approach?
   24. How do you implement a persistent data structure?
   25. What is a trie, and how is it used to solve problems?
   26. How does bubble sort work?
   27. Write a program to detect a cycle in an undirected graph using BFS.
   28. Write a program to find the Longest Common Substring.
   29. Write a program to find the minimum path sum in a grid (matrix).
   30. Explain the Bellman-Ford Algorithm.
6. Frequently Asked Questions
   How do I prepare for a DSA interview?
   Is DSA very tough?
   How many DSA questions are enough?
   How do you solve DSA problems in an interview?
7. Conclusion
Last Updated: Mar 27, 2024

Data Structure Interview Questions and Answers (2024)


Introduction

Preparing for a DSA(Data Structures and Algorithms) interview requires significant time and effort. It is essential to have a strong understanding of basic programming concepts and a deep understanding of data structures and algorithms.


This article will cover the most commonly asked DSA interview questions, including arrays, linked lists, trees, graphs, sorting, searching, and dynamic programming. We will be covering these concepts through interview questions and answers. So without further ado, let’s get started.

DSA Interview Questions

It is crucial to approach DSA interview questions with a structured thought process, analysing the problem, breaking it down into smaller sub-problems, and formulating a solution. It is also important to be able to explain your thought process and reasoning to the interviewer. Practice and repetition are key to improving your problem-solving skills and increasing your chances of success in a DSA interview.


DSA Interview Questions for Freshers

We'll start with some beginner-level DSA interview questions. These questions are widely asked in fresher interviews and in the initial rounds at unicorn start-ups and product-based companies.

1. What is a data structure? Give an example of a commonly used data structure.

Ans: A data structure is a way of organising and storing data in a computer so that it can be accessed and used efficiently. It defines a set of rules for how data can be stored, retrieved, and manipulated. A data structure can be thought of as a container for holding data, with its own set of operations and rules.

One commonly used data structure is an array. An array is a collection of elements of the same data type that are stored in contiguous memory locations. The elements can be accessed using their index, an integer value representing the element's position in the array.

Syntax:

datatype array_name[size];

Example:

int array[5];

2. What are various data structures available?

Ans: The various data structures available are:
 

  • Arrays: An array is a data structure that contains a fixed-size collection of elements and allows fast access through an index. However, its size remains static, so it cannot be dynamically resized during runtime.
  • Linked Lists: Linked lists are linear data structures where nodes are connected using pointers. This allows for dynamic sizing, which means elements can be added or removed efficiently without resizing the entire structure.
  • Stacks: Stacks are Last-In-First-Out (LIFO) data structures commonly used for managing function calls and other operations. The last element added to a stack is the first to be removed, creating a straightforward and efficient way to manage and process data.
  • Queues: Queues are First-In-First-Out (FIFO) data structures utilized for resource management in limited-capacity scenarios. 
  • Trees: Hierarchical structures with root and child nodes, such as Binary Search Trees and Binary Trees.
  • Graphs: Nodes connected by edges, used to model relationships between objects.
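As a quick illustration of the first two entries, here is a minimal Python sketch (the class names are illustrative, not a library API) contrasting a fixed-size array with a linked list that grows node by node:

```python
class ListNode:
    """A node in a singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    """A minimal singly linked list with O(1) insertion at the head."""
    def __init__(self):
        self.head = None

    def push_front(self, value):
        node = ListNode(value)
        node.next = self.head
        self.head = node

    def to_list(self):
        out, curr = [], self.head
        while curr:
            out.append(curr.value)
            curr = curr.next
        return out

arr = [0] * 5          # fixed-size array: 5 slots allocated up front
arr[0] = 10            # O(1) access by index

ll = LinkedList()      # linked list: grows one node at a time
for v in (3, 2, 1):
    ll.push_front(v)
print(arr[0])          # 10
print(ll.to_list())    # [1, 2, 3]
```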

3. What is an algorithm? Give an example of a commonly used algorithm.

Ans: An algorithm is a set of instructions that describe a sequence of steps to be followed to solve a specific problem or accomplish a particular task. It is a finite set of rules that take inputs and produce an output in a finite amount of time. Algorithms are used in computer science, mathematics, engineering, and other fields to solve complex problems.

A commonly used algorithm is the binary search algorithm, which is used to find the position of a target value within a sorted array. This algorithm works by repeatedly dividing the search interval in half until the target value is found or the search interval is empty. The binary search algorithm has a time complexity of O(log n), making it an efficient way to search for data in large data sets.

The following Python function implements the binary search algorithm.

def binary_search(arr, target):
	low = 0
	high = len(arr) - 1

	while low <= high:
		mid = (low + high) // 2

		if arr[mid] == target:
			return mid
		elif arr[mid] < target:
			low = mid + 1
		else:
			high = mid - 1

	return -1

4. Why do we need to do algorithm analysis?

Ans: Algorithm analysis is an important part of software development because it tells us how efficient a solution is as the amount of data grows. It examines how much time and memory our algorithms use. This matters because we want our programs to run quickly and not waste resources. By analysing algorithms, we can compare different ways of solving the same problem and choose the best one based on its performance. It also helps us find and fix performance bottlenecks in our code and design systems that can handle larger amounts of data without slowing down.
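To make this concrete, here is a small Python experiment (the function names are illustrative) that counts the comparisons made by linear search versus binary search on the same sorted data; these growth rates, O(n) versus O(log n), are exactly what algorithm analysis captures:

```python
def linear_search_steps(arr, target):
    """Return the number of comparisons linear search makes."""
    steps = 0
    for x in arr:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(arr, target):
    """Return the number of comparisons binary search makes."""
    low, high, steps = 0, len(arr) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))   # 1,000,000 comparisons: O(n)
print(binary_search_steps(data, 999_999))   # around 20 comparisons: O(log n)
```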

5. What is a stack, and how is it implemented?

Ans: A stack is a linear data structure that follows the Last In, First Out (LIFO) principle, meaning that the last element added to the stack is the first to be removed. It can be thought of as a stack of plates, where the last plate added is the first to be removed.

Here's an example implementation of a stack in C++:

#include <iostream>
#define MAX_SIZE 1000

using namespace std;

class Stack {
	int top;
	public:
		int arr[MAX_SIZE];

	
		Stack() {
			top = -1;
		}

	
		bool is_empty() {
			return top == -1;
		}

	
		bool is_full() {
			return top == MAX_SIZE - 1;
		}

	
		void push(int x) {
			if (is_full()) {
				cout << "Stack Overflow" << endl;
				return;
			}
			arr[++top] = x;
		}

	
		int pop() {
			if (is_empty()) {
				cout << "Stack Underflow" << endl;
				return -1;
			}
			return arr[top--];
		}

	
		int peek() {
			if (is_empty()) {
				cout << "Stack is empty" << endl;
				return -1;
			}
			return arr[top];
		}
};

int main() {
	Stack s;

	s.push(1);
	s.push(2);
	s.push(3);
	s.push(4);

	cout << "Top element is: " << s.peek() << endl;

	s.pop();
	s.pop();

	cout << "Top element is: " << s.peek() << endl;

	s.push(5);
	cout << "Top element is: " << s.peek() << endl;

	return 0;
}

 

Output:

Top element is: 4
Top element is: 2
Top element is: 5

 

In the above example, the stack is defined as a class. The private variable top represents the index of the top element of the stack. The is_empty and is_full methods check whether the stack is empty or full.

The push method is used to add an element to the top of the stack. It first checks if the stack is full, and if not, it increments the top index and adds the element to the array.

The pop method is used to remove the top element from the stack. It first checks if the stack is empty, and if not, it returns the top element and decrements the top index.

The peek method is used to return the top element of the stack without removing it. It first checks if the stack is empty, and if not, it returns the top element.

In the main function, we create an instance of the Stack class and perform some operations on it to demonstrate its functionality.

6. What is a queue, and how is it implemented?

Ans: A queue is a linear data structure that follows the First In, First Out (FIFO) principle, meaning that the first element added to the queue is the first to be removed. It can be thought of as a line of people waiting for a service, where the first person in line is the first to be served.

Here's an example implementation of a queue in C++:

#include <iostream>
#define MAX_SIZE 1000

using namespace std;

class Queue {
	int front, rear;
	public:
		int arr[MAX_SIZE];

		Queue() {
			front = rear = -1;
		}

		bool is_empty() {
			return front == -1 && rear == -1;
		}

		bool is_full() {
			return rear == MAX_SIZE - 1;
		}

		void enqueue(int x) {
			if (is_full()) {
				cout << "Queue Overflow" << endl;
				return;
			}
			if (is_empty()) {
				front = rear = 0;
			}
			else {
				rear++;
			}
			arr[rear] = x;
		}

		int dequeue() {
			if (is_empty()) {
				cout << "Queue Underflow" << endl;
				return -1;
			}
			int x = arr[front];
			if (front == rear) {
				front = rear = -1;
			}
			else {
				front++;
			}
			return x;
		}

		int peek() {
			if (is_empty()) {
				cout << "Queue is empty" << endl;
				return -1;
			}
			return arr[front];
		}
};

int main() {
	Queue q;

	q.enqueue(1);
	q.enqueue(2);
	q.enqueue(3);
	q.enqueue(4);

	cout << "Front element is: " << q.peek() << endl;

	q.dequeue();
	q.dequeue();

	cout << "Front element is: " << q.peek() << endl;
	q.enqueue(5);
	cout << "Front element is: " << q.peek() << endl;
	
	return 0;
}

 

Output: 

Front element is: 1
Front element is: 3
Front element is: 3

 

The queue is defined as a class. Here, the private variables front and rear represent the indices of the front and rear elements of the queue. The is_empty and is_full methods check whether the queue is empty or full.

The enqueue method is used to add an element to the rear of the queue. It first checks if the queue is full, and if not, it increments the rear index and adds the element to the array.

The dequeue method is used to remove the front element from the queue. It first checks if the queue is empty, and if not, it returns the front element and increments the front index.

The peek method is used to return the front element of the queue without removing it. It first checks if the queue is empty, and if not, it returns the front element.

In the main function, we create an instance of the Queue class and perform some operations on it to demonstrate its functionality.

7. What is the difference between a stack and a queue?

Ans: The difference between a stack and a queue is given below.

Key Points | Stack | Queue
Method or approach used | LIFO (Last In, First Out). | FIFO (First In, First Out).
Insertion | Add element to the top (push). | Add element to the rear (enqueue).
Deletion | Remove element from the top (pop). | Remove element from the front (dequeue).
Access | Access only the top element. | Access both the front and rear elements.
Use cases | Backtracking algorithms, expression evaluation, browser history. | Job scheduling, printing, buffering.
Example | Undo/Redo operations, call stack in programming. | Ticket queue, message queue.
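The contrast can be demonstrated in a few lines of Python, using a plain list as a stack and collections.deque as a queue:

```python
from collections import deque

stack = []
stack.append(1); stack.append(2); stack.append(3)   # push
print(stack.pop())      # 3 -- LIFO: the last element in is the first out

queue = deque()
queue.append(1); queue.append(2); queue.append(3)   # enqueue
print(queue.popleft())  # 1 -- FIFO: the first element in is the first out
```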

8. What is a binary search tree, and how is it implemented?

Ans: A binary search tree (BST) is a tree data structure where each node has at most two children, and the left child is always less than the parent, and the right child is greater than the parent. It allows for efficient searching, inserting, and deleting of elements in a sorted manner.

Here's an example implementation of a BST in C++:

#include <iostream>

using namespace std;

struct Node {
	int data;
	Node *left, *right;
};

class BST {
	Node *root;
	public:
		BST() {
			root = nullptr;
		}

		Node* get_root() {
			return root;
		}

		Node* create_node(int value) {
			Node *node = new Node;
			node->data = value;
			node->left = nullptr;
			node->right = nullptr;
			return node;
		}

		void insert(int value) {
			root = insert_node(root, value);
		}

		Node* insert_node(Node *node, int value) {
			if (node == nullptr) {
				return create_node(value);
			}
			if (value < node->data) {
				node->left = insert_node(node->left, value);
			}
			else {
				node->right = insert_node(node->right, value);
			}
			return node;
		}

		void inorder_traversal(Node *node) {
			if (node == nullptr) {
				return;
			}
			inorder_traversal(node->left);
			cout << node->data << " ";
			inorder_traversal(node->right);
		}

		void preorder_traversal(Node *node) {
			if (node == nullptr) {
				return;
			}
			cout << node->data << " ";
			preorder_traversal(node->left);
			preorder_traversal(node->right);
		}

		void postorder_traversal(Node *node) {
			if (node == nullptr) {
				return;
			}
			postorder_traversal(node->left);
			postorder_traversal(node->right);
			cout << node->data << " ";
		}

		bool search(Node *node, int value) {
			if (node == nullptr) {
				return false;
			}
			if (node->data == value) {
				return true;
			}
			else if (value < node->data) {
				return search(node->left, value);
			}
			else {
				return search(node->right, value);
			}
		}
};

int main() {
	BST bst;
	bst.insert(10);
	bst.insert(5);
	bst.insert(15);
	bst.insert(3);
	bst.insert(7);
	bst.insert(12);
	bst.insert(17);

	cout << "Inorder traversal: ";
	bst.inorder_traversal(bst.get_root());
	cout << endl;

	cout << "Preorder traversal: ";
	bst.preorder_traversal(bst.get_root());
	cout << endl;

	cout << "Postorder traversal: ";
	bst.postorder_traversal(bst.get_root());
	cout << endl;

	int value = 7;
	bool found = bst.search(bst.get_root(), value);
	if (found) {
		cout << value << " found in the BST." << endl;
	}
	else {
		cout << value << " not found in the BST." << endl;
	}

	return 0;
}

 

Output:

Inorder traversal: 3 5 7 10 12 15 17 
Preorder traversal: 10 5 3 7 15 12 17 
Postorder traversal: 3 7 5 12 17 15 10 
7 found in the BST.

 

In this implementation, the binary search tree is defined as a class BST with a private variable root that points to the tree's root node. The Node struct defines the structure of each node in the tree, which has a data field and pointers to the left and right child nodes. The other operations are done accordingly.

9. What is the difference between a breadth-first search and a depth-first search?

Ans: The difference between a breadth-first search and a depth-first search is given below.

Key Points | Breadth-First Search (BFS) | Depth-First Search (DFS)
Definition | A graph traversal algorithm that visits all nodes at a given depth level before proceeding to the next level. | A graph traversal algorithm that explores as far as possible along each branch before backtracking.
Data structure | Uses a queue to keep track of the nodes to visit. | Uses a stack or recursion to keep track of the nodes to visit.
Technique | Starts with the initial node and explores all of its neighbours before moving on. | Starts with the initial node and explores as far as possible along one branch before backtracking.
Time complexity | O(V+E), where V is the number of vertices and E is the number of edges. | O(V+E), where V is the number of vertices and E is the number of edges.
Space complexity | O(V), where V is the number of vertices. | O(V), where V is the number of vertices.
Speed | Slower if the solution is deep in the graph, as it explores every node at one depth before moving to the next. | Faster if the solution is deep in the graph, as it explores as far as possible before backtracking.
Optimality | Guarantees the shortest path in an unweighted graph, as it explores nodes level by level. | Does not guarantee the shortest path, as it may explore far from the starting node before backtracking.
Memory | May require more memory to store the queue of nodes to be visited, especially for wide graphs. | May require less memory, as it only stores the path from the starting node to the current node.
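To see both visit orders side by side, here is a small Python sketch (the example graph is made up) that runs BFS and DFS from the same start node:

```python
from collections import deque

graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D', 'E'],
    'D': ['E'],
    'E': [],
}

def bfs(start):
    """BFS: a queue makes us finish each depth level before the next."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

def dfs(start):
    """DFS: recursion (an implicit stack) follows one branch to its end first."""
    visited, order = set(), []
    def visit(node):
        visited.add(node)
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visit(nb)
    visit(start)
    return order

print(bfs('A'))  # ['A', 'B', 'C', 'D', 'E'] -- level by level
print(dfs('A'))  # ['A', 'B', 'D', 'E', 'C'] -- one branch at a time
```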

10. What is the difference between quicksort and mergesort?

Ans: Following is the difference between quicksort and mergesort.

Parameter | Quicksort | Mergesort
Sorting algorithm | Comparison sort | Comparison sort
Type of sort | In-place sort | Out-of-place sort
Best-case time | O(n log n) | O(n log n)
Average-case time | O(n log n) | O(n log n)
Worst-case time | O(n^2) | O(n log n)
Space complexity | O(log n) (recursion stack) | O(n)
Stable sort | No | Yes
Adaptive sort | No | No
Suitable for linked lists | No | Yes, with modifications
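To ground the comparison, here are compact Python sketches of both algorithms. Note that this quicksort copies sublists for clarity, whereas classic quicksort partitions the array in place:

```python
def quicksort(arr):
    """Quicksort: partition around a pivot, then sort each side.
    (This teaching version copies; the classic version partitions in place.)"""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + mid + quicksort(right)

def mergesort(arr):
    """Mergesort: split in half, sort each half, merge (O(n) extra space)."""
    if len(arr) <= 1:
        return arr
    half = len(arr) // 2
    left, right = mergesort(arr[:half]), mergesort(arr[half:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 2, 9, 1, 5, 6]
print(quicksort(data))   # [1, 2, 5, 5, 6, 9]
print(mergesort(data))   # [1, 2, 5, 5, 6, 9]
```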

DSA Interview Questions for Intermediate

Now, it's time for some intermediate-level DSA interview questions.

11. How do you find the maximum value in a binary tree?

Ans: To find the maximum value in a binary tree, we can traverse the tree in a certain order and keep track of the maximum value seen so far. Here's an example code in C++ to find the maximum value in a binary tree using a recursive approach:

#include<iostream>
#include<climits>  
// for INT_MIN constant

using namespace std;

class Node {
	public:
		int val;
		Node* left;
		Node* right;

		Node(int value) {
			val = value;
			left = nullptr;
			right = nullptr;
		}
};

int find_max(Node* root) {
	if (root == nullptr) {
	// return the smallest possible integer value if the tree is empty
		return INT_MIN;  
	}
	return max(root->val, max(find_max(root->left), find_max(root->right)));
}

int main() {
	// Example usage
	Node* root = new Node(1);
	root->left = new Node(2);
	root->right = new Node(3);
	root->right->left = new Node(4);
	root->right->right = new Node(5);

	int max_val = find_max(root);
	cout << max_val << endl;

	// Deallocate memory to avoid memory leaks
	delete root->left;
	delete root->right->left;
	delete root->right->right;
	delete root->right;
	delete root;
	return 0;
}

 

Output:

5

 

This code recursively searches for the maximum value in the binary tree rooted at the given root node. It returns INT_MIN if the tree is empty. Otherwise, it recursively finds the maximum values in the left and right subtrees and returns the largest of those two values and the root node's own value.

12. What is dynamic programming, and how is it used to solve problems?

Ans: Dynamic programming is a problem-solving technique used in computer science and mathematics. It involves breaking down complex problems into smaller, simpler subproblems and solving them in a recursive manner. The solutions to these subproblems are then stored in a table or memoised to be used later to solve larger problems.

Dynamic programming is useful for solving problems with overlapping subproblems and optimal substructure. Overlapping subproblems occur when the same subproblem is encountered multiple times in the computation. Optimal substructure occurs when the optimal solution to a larger problem can be found by combining the optimal solutions to its smaller subproblems.

Dynamic programming is applicable to a wide range of problems, including those in graph theory, optimisation, probability, game theory, and artificial intelligence. Some examples of problems that can be solved using dynamic programming include the Fibonacci sequence, the shortest path problem, the knapsack problem, and the traveling salesman problem.
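For example, the Fibonacci sequence can be computed with either top-down memoisation or a bottom-up table; both are standard dynamic-programming formulations:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP: each subproblem fib(k) is solved once, then cached."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):
    """Bottom-up DP: fill a table from the smallest subproblems upward."""
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(50))   # 12586269025 -- O(n), versus O(2^n) for naive recursion
print(fib_table(50))  # 12586269025
```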

13. How do you implement a breadth-first search?

Ans: Breadth-first search is an algorithm used to traverse and search a graph or a tree data structure. Here is an implementation of the breadth-first search algorithm in Python:

from collections import deque

class Node:
	def __init__(self, val=None, left=None, right=None):
		self.val = val
		self.left = left
		self.right = right

def bfs(root):
	if not root:
		return []
	queue = deque([root])
	result = []
	while queue:
		curr_node = queue.popleft()
		result.append(curr_node.val)
		if curr_node.left:
			queue.append(curr_node.left)
		if curr_node.right:
			queue.append(curr_node.right)
	return result

# Example usage
root = Node(1, Node(2), Node(3, Node(4), Node(5)))
bfs_result = bfs(root)
print(bfs_result)     

 

Output:

[1, 2, 3, 4, 5]


In this program, we define a Node class that represents a node in a binary tree. Each Node has a val attribute that stores the value at that node, and left, and right attributes that point to the left and right children, respectively.

We also define a bfs function that performs a breadth-first search on the binary tree and returns a list of node values in the order they were visited. We use a deque from the built-in collections module to keep track of the nodes to be visited next. We start with the root node and add it to the queue. Then, we repeatedly dequeue the front node, add its value to the result list, and enqueue its children (if they exist) to the back of the queue.

To use this program, we create a binary tree by instantiating Node objects and setting their val, left, and right attributes. We then call bfs on the root node and output the result.

14. Given a sorted array of integers, write a program to remove all duplicates in place and return the new length of the array. What is the time complexity of your algorithm?

Ans: The following function removes all duplicates in place and returns the new length of the array.

def remove_duplicates(nums):
	if not nums:
		return 0
	i = 0
	for j in range(1, len(nums)):
		if nums[j] != nums[i]:
			i += 1
			nums[i] = nums[j]
	return i + 1

# Example usage
nums = [1, 1, 2, 2, 2, 3, 4, 4, 5]
new_length = remove_duplicates(nums)
print(nums[:new_length])  
print(new_length) 

 

Output:

[1, 2, 3, 4, 5]
5

 

The algorithm works by iterating over the array while maintaining an index i that points to the last unique element written so far. When a new non-duplicate element nums[j] is found, it is written to position i + 1 and i is incremented. At the end of the loop, i + 1 is returned as the length of the deduplicated array.

The time complexity of this algorithm is O(n), where n is the length of the input array. This is because the algorithm only makes a single pass over the input array and performs a constant amount of work for each element. Therefore, the algorithm has linear time complexity.

15. Given a binary search tree, find the second smallest element in the tree. What is the time complexity of your algorithm?

Ans: We can use an in-order traversal to find the second smallest element in a binary search tree. In an in-order traversal of a binary search tree, the nodes are visited in ascending order, so the second smallest element is the second node that is visited in the traversal.

Here's the Python code to find the second smallest element in a binary search tree:

class TreeNode:
	def __init__(self, val=0, left=None, right=None):
		self.val = val
		self.left = left
		self.right = right

def find_second_smallest(root):
	stack = []
	curr = root
	count = 0

	while True:
		while curr is not None:
			stack.append(curr)
			curr = curr.left

		if len(stack) == 0:
			return None

		curr = stack.pop()
		count += 1

		if count == 2:
			return curr.val
			
		curr = curr.right

 

In this code, we use a stack to perform an in-order traversal of the binary search tree. We start at the root of the tree and traverse to the leftmost node, adding each node to the stack as we go. Once we reach the leftmost node, we pop it from the stack and increment a counter. 

If the counter is equal to 2, we have found the second smallest element and return its value. Otherwise, we move to the right child of the popped node and continue the traversal.

The time complexity of this algorithm is O(h), where h is the height of the binary search tree. In a balanced binary search tree, the height is O(log n), where n is the number of nodes in the tree. However, in the worst case, the tree can be a skewed tree with all nodes in a single branch, and in this case, the time complexity is O(n), where n is the number of nodes in the tree.

16. Write a program to merge overlapping intervals from a list of intervals. What is the time complexity of your algorithm?

Ans: To merge overlapping intervals in a list of intervals, we can follow the steps:

  1. Sort the intervals based on the start time.
     
  2. Initialise an empty result list.
     
  3. For each interval, if it overlaps with the last interval in the result list, merge the two intervals; otherwise, append the interval to the result list.
     
  4. Return the result list.
     

Here's an implementation of this algorithm in Python:

def merge_intervals(intervals):
	# sort the intervals based on start time
	intervals.sort(key=lambda x: x[0])
	
	# initialize an empty result list
	result = []
	
	# iterate through the intervals
	for interval in intervals:
		# if the result list is empty or the current interval doesn't overlap 
		# with the last interval in the result list, 
		# append the current interval to the result list
		
		if not result or interval[0] > result[-1][1]:
			result.append(interval)
		# otherwise, merge the current interval with the last interval in the result list
		else:
			result[-1] = (result[-1][0], max(result[-1][1], interval[1]))
	
	# return the result list
	return result

 

The time complexity of this algorithm is O(n log n) because of the sorting step. The merging step takes O(n) time because we iterate through each interval once. Therefore, the overall time complexity of the algorithm is O(n log n).

17. Given two sorted arrays, write a program to find the median of the merged array in O(log n) time, where n is the total number of elements in both arrays.

Ans: A straightforward approach is to merge the two arrays and then take the middle element(s):

def find_median(arr1, arr2):
	# Merge the two sorted arrays into a single sorted array
	merged_arr = []
	i = 0
	j = 0
	while i < len(arr1) and j < len(arr2):
		if arr1[i] < arr2[j]:
			merged_arr.append(arr1[i])
			i += 1
		else:
			merged_arr.append(arr2[j])
			j += 1
	merged_arr += arr1[i:]
	merged_arr += arr2[j:]

	# Find the median of the merged array
	n = len(merged_arr)
	if n % 2 == 0:
		# If the length of the merged array is even, 
		# the median is the average of the middle two elements
		mid = n // 2
		median = (merged_arr[mid - 1] + merged_arr[mid]) / 2
	else:
		# If the length of the merged array is odd, the median is the middle element
		median = merged_arr[n // 2]

	return median

 

In the program, we first merge the two sorted arrays into a single sorted array using a two-pointer approach, which takes O(n) time, where n is the total number of elements in both arrays.

We then find the median of the merged array using the formula for the median of an array. If the length of the merged array is even, we take the average of the middle two elements, and if the length of the merged array is odd, we take the middle element.

Since the two-pointer merge takes O(n) time and computing the median from the merged array takes O(1) time, the overall time complexity of this approach is O(n). Note that O(n) is not equivalent to O(log n): to actually meet the O(log n) requirement, the standard technique is to binary-search the partition point of the smaller array instead of merging.
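For reference, a genuinely logarithmic solution binary-searches the partition point of the smaller array rather than merging; the sketch below (assuming the arrays are not both empty) runs in O(log(min(m, n))) time:

```python
def find_median_sorted(arr1, arr2):
    """Median of two sorted arrays via partition binary search."""
    # Always binary-search over the shorter array.
    a, b = (arr1, arr2) if len(arr1) <= len(arr2) else (arr2, arr1)
    m, n = len(a), len(b)
    total = m + n
    half = (total + 1) // 2          # size of the combined left partition
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2           # elements taken from a's left side
        j = half - i                 # elements taken from b's left side
        a_left = a[i - 1] if i > 0 else float('-inf')
        a_right = a[i] if i < m else float('inf')
        b_left = b[j - 1] if j > 0 else float('-inf')
        b_right = b[j] if j < n else float('inf')
        if a_left <= b_right and b_left <= a_right:
            # Correct partition found: the median straddles the boundary.
            if total % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2
        elif a_left > b_right:
            hi = i - 1               # took too many from a; move left
        else:
            lo = i + 1               # took too few from a; move right

print(find_median_sorted([1, 3], [2]))     # 2
print(find_median_sorted([1, 2], [3, 4]))  # 2.5
```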

18. Implement a stack using two queues. What is the time complexity of push and pop operations in your implementation?

Ans: To implement a stack using two queues, we can use one queue as the stack's main storage and the other as a temporary buffer to facilitate push and pop operations. Here's the implementation:

class Stack:
	def __init__(self):
		self.q1 = []
		self.q2 = []

	def push(self, value):
		# Enqueue the value to the temporary buffer
		self.q2.append(value)

		# Enqueue all values from the main queue to the temporary buffer
		while len(self.q1) > 0:
			self.q2.append(self.q1.pop(0))

		# Swap the main queue and the temporary buffer
		self.q1, self.q2 = self.q2, self.q1

	def pop(self):
		if len(self.q1) == 0:
			return None

		# Dequeue and return the top value from the main queue
		return self.q1.pop(0)

 

In the push method, we enqueue the value to the temporary buffer, then we dequeue all values from the main queue and enqueue them to the temporary buffer. Finally, we swap the main queue and the temporary buffer so that the main queue becomes the new stack.

In the pop method, we simply dequeue and return the top value from the main queue.

The time complexity of the push operation in this implementation is O(n), where n is the number of elements in the stack, because every existing element must be dequeued and enqueued again to push a new one. (In Python, list.pop(0) is itself O(n); using collections.deque would make each individual dequeue O(1).)

The time complexity of the pop operation is O(1), because we only need to dequeue the top element from the main queue.

At last, let's discuss some advanced-level DSA interview questions. These questions are generally for experienced professionals and candidates appearing for technical interviews of unicorn start-ups and product-based companies.

19. How do you implement Dijkstra's algorithm, and what is its time complexity?

Ans: Dijkstra's graph search algorithm finds the shortest path between a source node and all other nodes in a weighted graph. The algorithm maintains a set of visited nodes and uses a priority queue to select the node with the shortest distance to the source.

Here's a Python implementation of Dijkstra’s algorithm in the graph:

import heapq

def dijkstra(graph, start):
	distances = {vertex: float('infinity') for vertex in graph}
	distances[start] = 0
	pq = [(0, start)]
	
	while pq:
		(current_distance, current_vertex) = heapq.heappop(pq)
		if current_distance > distances[current_vertex]:
			continue
		for neighbor, weight in graph[current_vertex].items():
			distance = current_distance + weight
			if distance < distances[neighbor]:
				distances[neighbor] = distance
				heapq.heappush(pq, (distance, neighbor))
	
	return distances
	
graph = {
	'A': {'B': 1, 'C': 4},
	'B': {'D': 3},
	'C': {'D': 2, 'E': 5},
	'D': {'E': 1},
	'E': {}
}

start = 'A'
distances = dijkstra(graph, start)
print(distances)

 

Output:

{'A': 0, 'B': 1, 'C': 4, 'D': 4, 'E': 5}

 

The code takes in a weighted graph represented as an adjacency list, where graph[v] maps each neighbour of vertex v to the corresponding edge weight. The code initialises a dictionary 'distances' with every distance set to infinity, and sets the distance to the start vertex to 0. It then uses a priority queue to process vertices in order of increasing distance. For each vertex, the code checks all of its neighbours, updates their distances if a shorter path is found, and pushes them onto the priority queue for processing.

The time complexity of Dijkstra's algorithm using a priority queue is O((E+V)logV). Here, E is the number of edges, and V is the number of vertices.
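Returning only the distances loses the actual routes. As a hedged extension of the same loop (my own sketch, not part of the standard answer), a predecessor map can be kept and walked backwards to rebuild the shortest path:

```python
import heapq

def dijkstra_with_path(graph, start, goal):
	distances = {v: float('inf') for v in graph}
	distances[start] = 0
	previous = {}  # predecessor of each vertex on its shortest path
	pq = [(0, start)]
	while pq:
		d, u = heapq.heappop(pq)
		if d > distances[u]:
			continue
		for v, w in graph[u].items():
			if d + w < distances[v]:
				distances[v] = d + w
				previous[v] = u
				heapq.heappush(pq, (d + w, v))
	# Walk the predecessor map backwards from the goal
	path, node = [], goal
	while node != start:
		path.append(node)
		node = previous[node]
	path.append(start)
	return distances[goal], path[::-1]

graph = {
	'A': {'B': 1, 'C': 4},
	'B': {'D': 3},
	'C': {'D': 2, 'E': 5},
	'D': {'E': 1},
	'E': {}
}
print(dijkstra_with_path(graph, 'A', 'E'))  # (5, ['A', 'B', 'D', 'E'])
```

The sketch assumes the goal is reachable from the start; an unreachable goal would need an extra check before the backtracking loop.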

20. What is a suffix tree, and how is it used to solve problems?

Ans: A suffix tree is a tree-based data structure used for efficient string processing. It represents all the suffixes of a given string as a tree, where each path from the root to a leaf node represents a unique suffix of the string.

One of the most common applications of a suffix tree is fast substring search. Specialised algorithms such as Ukkonen's algorithm can build the suffix tree in linear time, while related automaton-based methods such as the Aho-Corasick algorithm handle multi-pattern string searching.

Following is a simple O(n²) implementation of a suffix tree in Python. A unique terminator character '$' is appended so that every suffix ends at a leaf:

class Node:
	def __init__(self, start, end):
		self.start = start  # edge label is text[start:end]
		self.end = end
		self.children = {}

class SuffixTree:
	def __init__(self, text):
		self.text = text + "$"  # unique terminator
		self.root = Node(-1, -1)
		self.build()

	def build(self):
		# Naively insert each suffix text[i:], one at a time
		for i in range(len(self.text)):
			current = self.root
			j = i
			while j < len(self.text):
				if self.text[j] not in current.children:
					# No edge starts with text[j]: add a leaf for the rest of the suffix
					current.children[self.text[j]] = Node(j, len(self.text))
					break
				child = current.children[self.text[j]]
				k = child.start
				# Walk down the edge while characters match
				while j < len(self.text) and k < child.end and self.text[j] == self.text[k]:
					j += 1
					k += 1
				if k == child.end:
					# Consumed the whole edge: continue from the child node
					current = child
				else:
					# Mismatch inside the edge: split it
					split = Node(child.start, k)
					child.start = k
					split.children[self.text[k]] = child
					split.children[self.text[j]] = Node(j, len(self.text))
					current.children[self.text[split.start]] = split
					break

	def traverse(self, node, depth):
		if node.start == -1:
			print(depth, "<empty>")
		else:
			print(depth, self.text[node.start:node.end])
		for child in node.children.values():
			self.traverse(child, depth + 1)

	def display(self):
		self.traverse(self.root, 0)

text = "banana"
tree = SuffixTree(text)
tree.display()

 

Output:

0 <empty>
1 banana$
1 a
2 na
3 na$
3 $
2 $
1 na
2 na$
2 $
1 $

 

Once constructed, the suffix tree can solve various string-related problems, including substring search, pattern matching, and counting the occurrences of a specific pattern within the text, typically in time proportional to the pattern length rather than the text length. This is why suffix trees appear so often in string searching algorithms: all occurrences of a set of patterns in a text can be found efficiently.
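As a tiny illustration of the principle a suffix tree compresses: a pattern occurs in a text exactly when it is a prefix of some suffix. A naive sketch of that check (deliberately without the tree, so it runs in O(n·m) rather than the tree's O(m)):

```python
def is_substring_via_suffixes(text, pattern):
	# A suffix tree compresses exactly this set of suffixes; a pattern
	# occurs in the text iff it is a prefix of some suffix
	suffixes = [text[i:] for i in range(len(text))]
	return any(s.startswith(pattern) for s in suffixes)

print(is_substring_via_suffixes("banana", "ana"))  # True
print(is_substring_via_suffixes("banana", "nab"))  # False
```

The suffix tree performs the same prefix-of-a-suffix test, but by walking a single path from the root instead of scanning every suffix.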

DSA Interview Questions for Experienced

21. Write a program to find the shortest path between two nodes in a weighted graph using the A* algorithm. What is its time complexity?

Ans: The following is a Python program to find the shortest path between two nodes in a weighted graph using the A* algorithm.

import heapq

def a_star(graph, start, goal, heuristic):
	"""
	Find the shortest path from start to goal using the A* algorithm
	"""
	# Queue entries are (f, g, node, path), where f = g + h(node, goal);
	# keeping g separate avoids accumulating heuristic values into the path cost
	queue = [(heuristic(start, goal), 0, start, [])]
	visited = set()
	while queue:
		(f, cost, node, path) = heapq.heappop(queue)
		if node in visited:
			continue
		path = path + [node]
		if node == goal:
			return (cost, path)
		visited.add(node)
		for neighbor, weight in graph[node].items():
			if neighbor not in visited:
				new_cost = cost + weight
				heapq.heappush(queue, (new_cost + heuristic(neighbor, goal), new_cost, neighbor, path))
	return (float('inf'), [])  # goal unreachable

# Example usage
graph = {
	'A': {'B': 2, 'C': 5},
	'B': {'C': 2, 'D': 5},
	'C': {'D': 1, 'E': 6},
	'D': {'E': 3},
	'E': {}
}

heuristic = lambda a, b: 0  # no heuristic, equivalent to Dijkstra's algorithm
start, goal = 'A', 'E'
cost, path = a_star(graph, start, goal, heuristic)
print(f"Shortest path from {start} to {goal}: {path} (cost = {cost})")

 

Output:

Shortest path from A to E: ['A', 'B', 'C', 'D', 'E'] (cost = 8)

 

The time complexity of the A* algorithm is proportional to the number of nodes expanded during the search. It depends on the size of the graph and the quality of the heuristic function. 

The worst-case time complexity is O(b^d), where b is the branching factor of the graph and d is the depth of the shortest path.
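The example above uses a zero heuristic, which reduces A* to Dijkstra's algorithm. On grid-like graphs a common admissible heuristic is Manhattan distance; the coordinates below are hypothetical, made up only to illustrate the function shape A* expects:

```python
# Hypothetical grid coordinates for the vertices, chosen for illustration
coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (2, 1), 'E': (2, 2)}

def manhattan(a, b):
	# |dx| + |dy|: admissible when each step costs at least 1
	(x1, y1), (x2, y2) = coords[a], coords[b]
	return abs(x1 - x2) + abs(y1 - y2)

print(manhattan('A', 'E'))  # 4
```

Such a heuristic can be passed in place of the zero lambda; a better-informed (but still admissible) heuristic means fewer nodes expanded.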

22. Using a selection algorithm, write a program to find the kth smallest element in an unsorted array.

Ans: Following is a program to find the kth smallest element in an unsorted array using the QuickSelect selection algorithm.

import random

def quick_select(arr, k):
	if len(arr) == 1:
		return arr[0]

	# Choose a random pivot element
	pivot = random.choice(arr)

	# Partition the array into two sub-arrays
	left = [x for x in arr if x < pivot]
	right = [x for x in arr if x > pivot]
	mid = [x for x in arr if x == pivot]

	if k < len(left):
		return quick_select(left, k)
	elif k < len(left) + len(mid):
		return mid[0]
	else:
		return quick_select(right, k - len(left) - len(mid))

# Example usage
arr = [3, 1, 4, 2, 5]
k = 3
result = quick_select(arr, k - 1)
print(result)  

 

Output:

3

 

In this implementation, we first choose a random pivot element from the array. We then partition the array into three sub-arrays: one containing elements smaller than the pivot, one containing elements equal to the pivot, and one containing elements larger than the pivot.

We then recursively call ‘quick_select()’ on the appropriate sub-array based on the value of ‘k’ relative to the sizes of the sub-arrays.

The time complexity of the QuickSelect algorithm is O(n) on average, where n is the length of the array. However, in the worst case, the time complexity can be O(n^2), which can occur when the pivot element is consistently chosen to be the maximum or minimum element in the array.
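One application worth mentioning in an interview: QuickSelect finds the median of an odd-length array in O(n) average time by selecting index len(arr) // 2. A hedged sketch reusing the same randomised three-way partition:

```python
import random

def quick_select(arr, k):
	# Randomised three-way partition, as in the answer above
	if len(arr) == 1:
		return arr[0]
	pivot = random.choice(arr)
	left = [x for x in arr if x < pivot]
	mid = [x for x in arr if x == pivot]
	if k < len(left):
		return quick_select(left, k)
	if k < len(left) + len(mid):
		return pivot
	return quick_select([x for x in arr if x > pivot], k - len(left) - len(mid))

def median(arr):
	# The median of an odd-length array is the element at index len(arr) // 2
	return quick_select(arr, len(arr) // 2)

print(median([7, 1, 5, 3, 9]))  # 5
```

Despite the random pivot choice, the result is deterministic; only the running time varies.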

23. What is the difference between a hash table and a balanced search tree, and what are the advantages and disadvantages of each approach?

Ans: The differences between a hash table and a balanced search tree are summarised below.

| Key Points | Hash Table | Balanced Search Tree |
| --- | --- | --- |
| Data structure type | Unordered | Ordered |
| Implementation | Array plus a hash function | Tree structure |
| Search time complexity | O(1) average, O(n) worst case | O(log n) |
| Insertion time complexity | O(1) average, O(n) worst case | O(log n) |
| Deletion time complexity | O(1) average, O(n) worst case | O(log n) |
| Space complexity | O(n) | O(n) |
| Handling collisions | Requires a hash function that distributes keys uniformly; collisions may need extra space or rehashing | No hash function needed; balancing is done dynamically and keys are kept in order |

Advantages of Hash Table:

  • Constant time (O(1)) complexity for average case lookups, inserts, and deletions.
     
  • Hash tables can be used for a wide range of applications, including caching, databases, and cryptographic applications.
     
  • Hash tables have a relatively small memory footprint compared to BSTs, especially when the number of items is small, and they can easily handle dynamic datasets where the number of items changes frequently.
     

Disadvantages of Hash Table:

  • Hash tables have a worst-case complexity of O(n) for lookups, inserts, and deletions, where n is the number of items stored in the table. This can occur when there are many collisions or the hash function is poorly designed.
     
  • Hash tables cannot be used for operations that require ordered traversal of elements.
     
  • Hash tables require a good hash function to distribute keys uniformly over the hash table. Collisions can occur, which reduces the efficiency of the hash table.
     

Advantages of Balanced Search Tree:

  • A balanced tree (such as AVL or Red-Black Tree) guarantees O(log n) time complexity for lookups, inserts, and deletions.
     
  • Balanced trees can be used for a wide range of applications, including databases, compilers, and network routing tables.
     
  • Balanced trees can be used for operations that require ordered traversal of elements.
     

Disadvantages of Balanced Search Tree:

  • BSTs have a larger memory footprint compared to hash tables, especially when the number of items is small.
     
  • BSTs can be slower than hash tables for small data sets due to the overhead of maintaining the tree structure.
     
  • BSTs are less efficient than hash tables when it comes to handling dynamic datasets where the number of items can change frequently.
     

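To make the ordered-traversal trade-off concrete, here is a small sketch: a Python dict (a hash table) answers membership in O(1) on average but offers no useful key ordering, while a sorted key sequence, used here with bisect as a stand-in for a balanced tree's ordering, supports range queries:

```python
import bisect

# A dict is Python's built-in hash table: O(1) average membership test
table = {17: "a", 3: "b", 42: "c"}
print(17 in table)  # True

# A sorted key list plus bisect stands in for a balanced BST's ordering,
# enabling range queries a hash table cannot answer directly
keys = sorted(table)               # [3, 17, 42]
lo = bisect.bisect_left(keys, 10)  # first key >= 10
hi = bisect.bisect_left(keys, 50)  # first key >= 50
print(keys[lo:hi])  # [17, 42]
```

A real balanced tree keeps the order incrementally under inserts and deletes; the sorted-list stand-in here would pay O(n) per insertion.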
24. How do you implement a persistent data structure?

Ans: A persistent data structure is a data structure that allows the ability to maintain previous versions of the data even after modifications have been made. Implementing a persistent data structure involves creating a new version of the data structure every time a modification is made.

Here's an example implementation of a simple persistent binary tree data structure in Python using immutable data structures:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TreeNode:
	val: int
	left: Optional['TreeNode'] = None
	right: Optional['TreeNode'] = None

class PersistentBST:
	def __init__(self):
		self.roots = [None]

	def insert(self, val):
		new_root = self._insert_helper(self.roots[-1], val)
		self.roots.append(new_root)

	def _insert_helper(self, root, val):
		if root is None:
			return TreeNode(val=val)
		if val < root.val:
			return TreeNode(val=root.val, left=self._insert_helper(root.left, val), right=root.right)
		else:
			return TreeNode(val=root.val, left=root.left, right=self._insert_helper(root.right, val))

	def search(self, val):
		return self._search_helper(self.roots[-1], val)

	def _search_helper(self, root, val):
		if root is None:
			return False
		if root.val == val:
			return True
		if val < root.val:
			return self._search_helper(root.left, val)
		else:
			return self._search_helper(root.right, val)

# Sample usage
bst = PersistentBST()
bst.insert(5)
bst.insert(2)
bst.insert(7)
print(bst.search(5))  
print(bst.search(2))  
print(bst.search(7))  
print(bst.search(4))  

# demonstrate immutability
print(bst.roots[0])  
print(bst.roots[1])  
print(bst.roots[2])  

 

Output:

True
True
True
False
None
TreeNode(val=5, left=None, right=None)
TreeNode(val=5, left=TreeNode(val=2, left=None, right=None), right=None)

 

In this implementation, we use the dataclass decorator from Python's dataclasses module to create an immutable TreeNode class that stores the value of the node and its left and right children. We also use a list to store a sequence of root nodes, with each new root being created by calling _insert_helper and appended to the list.

The insert method inserts a new value into the BST, creating a new root node as needed, while the search method searches for a given value in the most recent version of the tree. Since the TreeNode class is immutable, each time we modify a node, we create a new instance of the class with the updated values.
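To underline that earlier versions remain queryable, here is a condensed, function-style restatement of the same path-copying idea (a sketch, not the class above), searching version 1 of the tree (before 2 and 7 were inserted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TreeNode:
	val: int
	left: Optional['TreeNode'] = None
	right: Optional['TreeNode'] = None

def insert(root, val):
	# Path copying: only the nodes on the insertion path are re-created;
	# everything else is shared with the previous version
	if root is None:
		return TreeNode(val)
	if val < root.val:
		return TreeNode(root.val, insert(root.left, val), root.right)
	return TreeNode(root.val, root.left, insert(root.right, val))

def search(root, val):
	while root is not None:
		if root.val == val:
			return True
		root = root.left if val < root.val else root.right
	return False

versions = [None]
for v in (5, 2, 7):
	versions.append(insert(versions[-1], v))

print(search(versions[-1], 7))  # True: the latest version contains 7
print(search(versions[1], 7))   # False: version 1 only holds 5
```

Each insert costs O(h) extra space for the copied path (h being the tree height); untouched subtrees are shared between versions rather than copied.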

25. What is a trie, and how is it used to solve problems?

Ans: A trie, also known as a prefix tree or a digital tree, is a tree-like data structure used for efficient retrieval of strings with common prefixes. In a trie, each node represents a single character or a prefix of a string, and the root node represents an empty string. The edges of the trie represent characters that link nodes together, and each leaf node represents a complete word or string.

Here's an implementation of a basic trie in Python:

class TrieNode:
	def __init__(self):
		self.children = {}
		self.end_of_word = False

class Trie:
	def __init__(self):
		self.root = TrieNode()

	def insert(self, word):
		current = self.root
		for char in word:
			if char not in current.children:
				current.children[char] = TrieNode()
			current = current.children[char]
		current.end_of_word = True

	def search(self, word):
		current = self.root
		for char in word:
			if char not in current.children:
				return False
			current = current.children[char]
		return current.end_of_word

	def starts_with_prefix(self, prefix):
		current = self.root
		for char in prefix:
			if char not in current.children:
				return []
			current = current.children[char]
		return self._get_words_from_node(current, prefix)

	def _get_words_from_node(self, node, prefix):
		words = []
		if node.end_of_word:
			words.append(prefix)
		for char in node.children:
			words.extend(self._get_words_from_node(node.children[char], prefix + char))
		return words

# Sample usage
trie = Trie()
words = ["cat", "dog", "apple", "banana", "car", "card", "cab"]
for word in words:
	trie.insert(word)

print(trie.search("dog")) 
print(trie.search("ca"))   
print(trie.starts_with_prefix("ca"))    

 

Output:

True
False
['cat', 'car', 'card', 'cab']

 

This implementation creates a TrieNode class to represent each node in the Trie and a Trie class that uses these nodes to store and search for words. The insert method adds a new word to the Trie, the search method checks whether a word is present in the Trie, and the starts_with_prefix method returns a list of words that start with a given prefix.

In the sample usage, we create a new Trie, insert a list of words, and then search for the word "dog" (which returns True), search for "ca" (which returns False, since "ca" is a prefix but not a complete word in the Trie), and get a list of words that start with the prefix "ca" (which returns ['cat', 'car', 'card', 'cab']).


26. How does bubble sort work?

Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The process is repeated until the list is sorted.

Here's an example of how bubble sort works with Python code:

def bubble_sort(arr):
	n = len(arr)
	# Compare adjacent elements
	# Swap elements if they are in the wrong order
	for i in range(n - 1):
		for j in range(n - i - 1):
			if arr[j] > arr[j + 1]:
				arr[j], arr[j + 1] = arr[j + 1], arr[j]
# Example 
arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(arr)
print("Sorted array:", arr)

 

Output:

Sorted array: [11, 12, 22, 25, 34, 64, 90]


In this code, the bubble_sort function takes an array arr as input and performs the bubble sort algorithm. The outer loop runs for n - 1 times, where n is the length of the array. Each pass compares adjacent elements using the inner loop and swaps them if they are out of order. By the end of each pass, the largest element "bubbles" up to its correct position at the end of the array. This process is repeated until the entire array is sorted.

In the example usage, the array `[64, 34, 25, 12, 22, 11, 90]` is sorted using the bubble_sort function, and the sorted array `[11, 12, 22, 25, 34, 64, 90]` is printed.
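A refinement often raised in interviews: if a full pass makes no swaps, the array is already sorted and the algorithm can stop early, making the best case O(n). A hedged sketch of that variant:

```python
def bubble_sort_early_exit(arr):
	n = len(arr)
	for i in range(n - 1):
		swapped = False
		for j in range(n - i - 1):
			if arr[j] > arr[j + 1]:
				arr[j], arr[j + 1] = arr[j + 1], arr[j]
				swapped = True
		if not swapped:
			# No swaps in a full pass: the array is already sorted
			break

arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort_early_exit(arr)
print("Sorted array:", arr)  # Sorted array: [11, 12, 22, 25, 34, 64, 90]
```

The worst case remains O(n^2); the flag only helps on nearly-sorted input.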

27. Write a program to detect a cycle in an undirected graph using Breadth-First Search (BFS). 

BFS is a traversal technique where we visit the nodes level-wise: it first visits all nodes of one level and then moves to the next. 

The intuition is that we traverse the graph level by level from a starting node. If we can reach a single node via two different paths, the graph must contain a cycle.

To detect a cycle in an undirected graph using Breadth-First Search (BFS), maintain a visited set and record the parent of each visited vertex. If you encounter a vertex that has already been visited and it is not the parent of the current vertex, a cycle is present.

Let's look at the Python implementation of the above problem.

from collections import defaultdict, deque

def isCyclic(graph, start):
	queue = deque([(start, -1)])  # (vertex, parent)
	visited = set()
   
	while queue:
		vertex, parent = queue.popleft()
		visited.add(vertex)
       
		for neighbour in graph[vertex]:
			if neighbour not in visited:
				queue.append((neighbour, vertex))
			elif neighbour != parent:
				return True
   
	return False
	
# Example 

graph = defaultdict(list)
graph[0] = [1, 2]
graph[1] = [0, 2]
graph[2] = [0, 1, 3]
graph[3] = [2]

start_vertex = 0

if isCyclic(graph, start_vertex):
	print("Graph contains a cycle.")
else:
	print("Graph does not contain a cycle.")

 

Output:

Graph contains a cycle.


In this code, the isCyclic function takes an undirected graph represented as a dictionary and a starting vertex. It performs BFS on the graph starting from the start vertex. The visited set keeps track of visited vertices. If a neighbour is already visited and is not the parent of the current vertex, it means there is a cycle in the graph.

The time complexity of the cycle detection algorithm using Breadth-First Search (BFS) in an undirected graph is O(V + E), where V is the number of vertices and E is the number of edges in the graph. In the worst-case scenario, the algorithm will visit every vertex and every edge in the graph. 

The space complexity of the algorithm is O(V), where V is the number of vertices. This is because, in the worst case, the algorithm needs to store the visited set, which can contain all vertices in the graph.

28. Write a program to find the Longest Common Substring.

The program uses dynamic programming to compute the length of the Longest Common Substring (LCS) of two strings.

The approach utilises a 2D table, dp, where dp[i][j] represents the length of the longest common substring ending at the first i characters of str1 and the first j characters of str2. The table is filled iteratively by comparing characters from both strings.

The program iterates through each character of str1 and str2, comparing them. If the characters are equal, dp[i][j] is set to dp[i-1][j-1] + 1, extending the current common substring. If they are not equal, dp[i][j] is reset to 0, since a common substring must be contiguous; the answer is the maximum value seen anywhere in the table.

Pseudocode

  • Create a matrix dp of size (m+1) x (n+1).
     
  • Initialise the first row and the first column of the matrix dp with zeros.
     
  • Initialise a variable maxLength to store the length of the longest common substring.
     
  • Start a nested loop over i and j.
    • If i == 0 or j == 0, then dp[i][j] = 0.
    • If str1[i-1] == str2[j-1], then dp[i][j] = 1 + dp[i-1][j-1]; otherwise dp[i][j] = 0.
       
  • Keep track of the maximum value of dp[i][j] and return it.

Let's look at the Python implementation of the following approach.

def lcsDP(string_1, string_2, M, N):
   
	# create a 2-D array named dp and initialise all the values as zero.
	dp = [[0 for k in range(N+1)] for l in range(M+1)]

	# To store the length of the longest common substring
	count = 0

	# initialise a nested loop
	# if the characters of string_1 and string_2 match, extend the substring
	# keep track of the maximum length seen so far
	# if the characters do not match, the value is zero
	
	for i in range(M + 1):
		for j in range(N + 1):
			if (i == 0 or j == 0):
				dp[i][j] = 0
                
			elif (string_1[i-1] == string_2[j-1]):
				dp[i][j] = dp[i-1][j-1] + 1			
				count = max(count, dp[i][j])
                
			else:
				dp[i][j] = 0
   
	return count
   
# Driver code
if __name__ == "__main__":
   
	string_1 = "qdef"
	string_2 = "defq"
	print("The Length of Longest Common Substring is", lcsDP(string_1, string_2, len(string_1), len(string_2)))

 

Output:

The Length of Longest Common Substring is 3


Time Complexity is O(M*N), where M and N are the lengths of string1 and string2; the nested loop that fills the entries of the 2-D dp table takes O(M*N) time.

Space Complexity is O(M*N) as well, since a 2-D dp array of size (M+1) x (N+1) is required.
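If the substring itself is needed rather than just its length, the same table can also track the ending index of the best match (the endIndex idea from the pseudocode) and slice it out of the first string; a hedged sketch:

```python
def lcs_substring(s1, s2):
	m, n = len(s1), len(s2)
	dp = [[0] * (n + 1) for _ in range(m + 1)]
	best, end = 0, 0  # length and end position (in s1) of the best match
	for i in range(1, m + 1):
		for j in range(1, n + 1):
			if s1[i - 1] == s2[j - 1]:
				dp[i][j] = dp[i - 1][j - 1] + 1
				if dp[i][j] > best:
					best, end = dp[i][j], i
	return s1[end - best:end]

print(lcs_substring("qdef", "defq"))  # def
```

Time and space remain O(M*N); only two extra integers are tracked.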

29. Write a program to find the minimum path sum in a grid (matrix).

The program finds the minimum path sum in a grid represented as a 2D matrix. The minimum path sum is the smallest sum of numbers obtained by moving only right or down from the top-left cell to the bottom-right cell of the grid.

The program uses dynamic programming to compute the minimum path sums. It utilizes a 2D table, dp, where dp[i][j] represents the minimum path sum to reach the cell at coordinates (i, j).

Algorithm

  1. Initialize the first cell dp[0][0] as the value in the top-left cell of the grid.
     
  2. Initialize the first row dp[0][j] by adding each value in the first row of the grid to the previous cell's sum.
     
  3. Initialize the first column dp[i][0] by summing up the values in the first column of the grid and the previous cell's sum.
     
  4. Fill the rest of the dp table by taking the minimum of the previous cell's sum (from left or above) and adding the current cell's value.
     
  5. Return the value at the bottom-right cell of the dp table, which represents the minimum path sum.


Let's look at the Python implementation of the following approach.

def min_path_sum(grid):
	m = len(grid)
	n = len(grid[0])
	# Create a 2D table to store the minimum path sums
	dp = [[0] * n for _ in range(m)]
	# Compute minimum path sums
	dp[0][0] = grid[0][0]
	for i in range(1, n):
		dp[0][i] = dp[0][i - 1] + grid[0][i]
	for j in range(1, m):
		dp[j][0] = dp[j - 1][0] + grid[j][0]
	for i in range(1, m):
		for j in range(1, n):
			dp[i][j] = grid[i][j] + min(dp[i - 1][j], dp[i][j - 1])
	return dp[m - 1][n - 1]
	
# Example usage
grid = [
	[1, 3, 1],
	[1, 5, 1],
	[4, 2, 1]
]
min_sum = min_path_sum(grid)
print("Minimum Path Sum:", min_sum)


Output

Minimum Path Sum: 7


The time complexity of the program is O(m * n), where m is the number of rows and n is the number of columns in the grid. This is because we fill the entire dp table by iterating through each cell once.

The space complexity of the program is O(m * n) as well since we create a dp table of the same size as the input grid.
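If the route itself is wanted as well, one option is to backtrack through the filled dp table, stepping at each cell to whichever neighbour produced its value; a hedged sketch (the list of (row, col) tuples is my own representation of the route):

```python
def min_path_with_route(grid):
	m, n = len(grid), len(grid[0])
	dp = [[0] * n for _ in range(m)]
	# Same forward pass as the answer above
	dp[0][0] = grid[0][0]
	for j in range(1, n):
		dp[0][j] = dp[0][j - 1] + grid[0][j]
	for i in range(1, m):
		dp[i][0] = dp[i - 1][0] + grid[i][0]
	for i in range(1, m):
		for j in range(1, n):
			dp[i][j] = grid[i][j] + min(dp[i - 1][j], dp[i][j - 1])
	# Backtrack from the bottom-right corner to recover the route
	i, j, route = m - 1, n - 1, []
	while (i, j) != (0, 0):
		route.append((i, j))
		if i > 0 and dp[i][j] == dp[i - 1][j] + grid[i][j]:
			i -= 1
		else:
			j -= 1
	route.append((0, 0))
	return dp[m - 1][n - 1], route[::-1]

grid = [
	[1, 3, 1],
	[1, 5, 1],
	[4, 2, 1]
]
print(min_path_with_route(grid))  # (7, [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)])
```

The backtracking adds only O(m + n) work on top of filling the table.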

30. Explain Bellman-Ford Algorithm.

Bellman-Ford is a dynamic programming algorithm that iteratively relaxes the edges of a graph to find the shortest distances from a source vertex. Unlike Dijkstra's algorithm, it works with negative edge weights, and it can also detect whether the graph contains a negative cycle. 

This program takes a graph represented as a dictionary and a source vertex. It initialises all distances as infinity except for the source vertex, which is set to 0.

The algorithm then iteratively relaxes the edges of the graph V-1 times. During each iteration, it checks if the distance from the source vertex to a neighbouring vertex can be reduced by considering the current path. If so, it updates the distance.

After V-1 iterations, the algorithm checks for negative cycles. If any edge can still be relaxed, the graph contains a cycle with negative total weight, and shortest distances are not well-defined.

Let's take a look at the Python implementation of the above approach.

import sys
def bellman_ford(graph, source):
	distances = {v: sys.maxsize for v in graph}  # Initialize distances with infinity
	distances[source] = 0  # Distance from source to source is 0
	# Relax edges repeatedly
	for _ in range(len(graph) - 1):
		for u, edges in graph.items():
			for v, weight in edges.items():
				new_dist = distances[u] + weight
				if new_dist < distances[v]:
					distances[v] = new_dist
					
	# Check for negative cycles
	for u, edges in graph.items():
		for v, weight in edges.items():
			if distances[u] + weight < distances[v]:
				return None  # Graph contains negative cycle
	return distances
	
# Example
graph = {
	'A': {'B': -1, 'C': 4},
	'B': {'C': 3, 'D': 2, 'E': 2},
	'C': {},
	'D': {'B': 1, 'C': 5},
	'E': {'D': -3},
}
source_vertex = 'A'
distances = bellman_ford(graph, source_vertex)
if distances is None:
	print("Graph contains negative cycle")
else:
	print("Shortest distances from", source_vertex + ":")
	for vertex, distance in distances.items():
		print(vertex, ":", distance)

 

Output

Shortest distances from A:
A : 0
B : -1
C : 2
D : -2
E : 1


The time complexity of the Bellman-Ford algorithm is O(V * E), where V is the number of vertices and E is the number of edges in the graph. It performs relaxation for each edge V-1 times. 

The space complexity is O(V) since the algorithm stores the distances in a dictionary with V entries.
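To see the negative-cycle check actually fire, here is a small self-contained sketch; the graph below is made up for illustration, with the cycle B → C → B having total weight 2 + (-3) = -1:

```python
import sys

def bellman_ford(graph, source):
	# Condensed restatement of the relaxation scheme above
	distances = {v: sys.maxsize for v in graph}
	distances[source] = 0
	for _ in range(len(graph) - 1):
		for u, edges in graph.items():
			for v, weight in edges.items():
				if distances[u] + weight < distances[v]:
					distances[v] = distances[u] + weight
	# One extra pass: any further improvement means a negative cycle
	for u, edges in graph.items():
		for v, weight in edges.items():
			if distances[u] + weight < distances[v]:
				return None
	return distances

# Hypothetical graph: the cycle B -> C -> B has total weight 2 + (-3) = -1
bad = {'A': {'B': 1}, 'B': {'C': 2}, 'C': {'B': -3}}
print(bellman_ford(bad, 'A'))  # None
```

After V-1 rounds the distances in a cycle-free graph have settled, so the extra pass relaxing an edge is proof of a reachable negative cycle.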

Frequently Asked Questions

How do I prepare for a DSA interview?

To prepare for a Data Structures and Algorithms (DSA) interview:

  • Review key DSA concepts and algorithms.
  • Solve practice problems on coding platforms.
  • Study common data structures and their operations.
  • Understand time and space complexity analysis.
  • Practice solving interview-style coding questions.
     

Is DSA very tough?

The difficulty level of Data Structures and Algorithms (DSA) can vary depending on an individual's background, experience, and familiarity with the concepts. With constant practice and determination, any person can be efficient in DSA.

How many DSA questions are enough?

The number of Data Structures and Algorithms (DSA) questions needed to feel prepared for interviews can vary from person to person. Some people understand a concept by solving just 2-3 problems, while others will require more. Make sure to cover all the concepts via problem-solving.

How do you solve DSA problems in an interview?

To solve DSA problems in an interview:

  • Understand the problem requirements properly.
  • Design an efficient algorithm using appropriate data structures.
  • Write code with attention to edge cases and error handling.
  • Test the code with sample inputs and optimize if necessary.
  • Clearly communicate your thought process to the interviewer.

Conclusion

This article discussed the top DSA interview questions and their answers. We covered all aspects and topics from where you can expect questions in data structures and algorithms. 

We hope that this article has helped you with various DSA interview questions. 

You can refer to our articles on similar topics for further reading.

 

To learn more about Data Structures and Algorithms, you can enroll in our DSA in C++ Course

Happy Learning!
