Tip 1: Understand problem logic, not just solutions (arrays, strings, sliding window).
Tip 2: Practice trees, linked lists, and hash maps by coding them from scratch.
Tip 3: Build projects using Node.js, JavaScript, and Flask for hands-on backend skills.
Tip 1: Be sure to know your projects thoroughly and be honest about them.
Tip 2: Include relevant projects that effectively showcase your skills and experience.

Step 1: I first thought of using Kruskal’s algorithm to find the MST. That gave me the minimum-cost spanning tree.
Step 2: The interviewer then asked for the second-best MST, so I realized I needed to explore alternative edges.
Step 3: My approach would be:
Build MST using Kruskal’s.
For each non-MST edge, try adding it → this creates a cycle.
Remove the maximum weight edge in that cycle → compute new cost.
Keep track of the minimum among these candidate costs; that minimum is the second-best MST cost.
Step 4: Due to time constraints, I wasn’t able to code the full solution, but I explained the approach.
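The steps above can be sketched roughly like this in Python. This is a minimal illustration, not production code; the function names and the `(weight, u, v)` edge format are my own assumptions, and it assumes a connected undirected graph.

```python
def kruskal(n, edges):
    """Return (mst_cost, set of edge indices used), edges as (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    cost, used = 0, set()
    for i in sorted(range(len(edges)), key=lambda i: edges[i][0]):
        w, u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            cost += w
            used.add(i)
    return cost, used

def second_mst_cost(n, edges):
    mst_cost, used = kruskal(n, edges)
    # Adjacency list of the MST, for finding the cycle each extra edge creates.
    adj = {u: [] for u in range(n)}
    for i in used:
        w, u, v = edges[i]
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_edge_on_path(src, dst):
        # DFS through the tree from src to dst, tracking the heaviest edge seen.
        stack = [(src, -1, 0)]
        while stack:
            node, par, best = stack.pop()
            if node == dst:
                return best
            for nxt, w in adj[node]:
                if nxt != par:
                    stack.append((nxt, node, max(best, w)))
        return 0

    best = float('inf')
    for i, (w, u, v) in enumerate(edges):
        if i not in used:
            # Adding edge i creates a cycle; swap out its heaviest tree edge.
            best = min(best, mst_cost + w - max_edge_on_path(u, v))
    return mst_cost, best
```

Each non-MST edge yields one candidate tree, and the cheapest candidate is the second-best MST.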



The diameter of a binary tree is the length of the longest path between any two nodes in the tree, where the length of a path is the number of edges between its end nodes.
Input: Consider the given binary tree:

Output: 6
Explanation:
Nodes in the diameter are highlighted. The length of the diameter, i.e., the path length, is 6.
Step 1: At first, I tried brute force → calculate height for every node’s left & right, then take max. This was O(n²).
Step 2: The interviewer asked me to optimize, so I modified the DFS:
At each node, return height.
While doing so, also update a global max for leftHeight + rightHeight.
Step 3: This gave me an O(n) solution.
Result: The interviewer was happy with the optimized approach.
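The optimized single-pass DFS described above can be sketched like this in Python (the `Node` class is an assumed minimal tree representation):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def diameter(root):
    best = 0  # longest path seen so far, measured in edges

    def height(node):
        nonlocal best
        if not node:
            return 0
        lh, rh = height(node.left), height(node.right)
        best = max(best, lh + rh)  # path passing through this node
        return 1 + max(lh, rh)

    height(root)
    return best
```

Each node is visited once and returns its height, while the diameter is updated as a side effect, giving O(n) time.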

Step 1: I first thought of brute force DFS to count all paths, but realized it could explode in complexity.
Step 2: The interviewer hinted at DP combined with topological sort. The idea is:
Do a topological ordering.
Use DP: dp[v] = sum(dp[u]) for all edges u→v.
Step 3: I wasn’t able to implement it within the time limit, but I knew the direction.
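The DP-over-topological-order idea can be sketched as follows, assuming the task is counting paths from a source to a destination in a DAG (the input format here is my own illustration):

```python
from collections import deque

def count_paths(n, edges, src, dst):
    """Count distinct paths src -> dst in a DAG with nodes 0..n-1."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1

    # Kahn's algorithm gives a topological ordering.
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)

    dp = [0] * n   # dp[v] = number of paths from src to v
    dp[src] = 1
    for u in order:            # topological order ensures dp[u] is final
        for v in adj[u]:
            dp[v] += dp[u]     # dp[v] = sum of dp[u] over all edges u -> v
    return dp[dst]
```

Processing nodes in topological order guarantees every predecessor count is complete before it is propagated, so the whole thing runs in O(V + E).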
Suppose you are working at Twitter and you want to fetch tweets within a certain proximity (say 5 km radius) of a given location. How would you approach solving this problem?
Tip 1: I first thought of storing tweets with their latitude/longitude values in a hashmap keyed by location buckets. This way, retrieval within an area becomes easier.
Tip 2: Then I realized that for accurate proximity search we’d need spatial indexing instead of a plain hashmap. I mentioned data structures like Quadtrees or Geohash for dividing locations into grids.
Tip 3: To further optimize, I suggested using a database with geospatial indexing (like PostgreSQL with PostGIS or MongoDB’s geo-queries). These can efficiently query points within a given radius, using the Haversine formula for distance.
Tip 4: For scalability (since Twitter has huge data), I added that tweets could be partitioned and indexed geographically, so the system queries only relevant partitions instead of scanning all tweets.
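The Haversine distance mentioned in Tip 3 can be sketched like this; the `tweets_within` helper and the `(tweet_id, lat, lon)` tuple format are illustrative assumptions, not how Twitter actually stores tweets:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def tweets_within(tweets, lat, lon, radius_km=5.0):
    """Filter (tweet_id, lat, lon) tuples to those inside the radius."""
    return [t for t in tweets if haversine_km(lat, lon, t[1], t[2]) <= radius_km]
```

In practice this linear filter would only run over the candidate set returned by the spatial index (Geohash cells or a Quadtree region), not over all tweets.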

Input: Let the binary tree be:

Output: [2, 7, 5, 2, 6, 5, 4, 11, 9]
Explanation: The nodes having the same 'x' coordinates are:
'x' = -2 : [2]
'x' = -1 : [7, 5]
'x' = 0 : [2, 6]
'x' = 1 : [5, 4, 11]
'x' = 2 : [9]
Here 4 and 11 have the same 'x' and 'y' coordinates, so they are listed in non-decreasing order of value.
Step 1: At first, I thought of doing a simple BFS/DFS and grouping by depth, but that wouldn’t separate vertical columns.
Step 2: Then I realized I need to map each node with its horizontal distance (column index).
Left child → col - 1
Right child → col + 1
Step 3: I used a BFS traversal with a queue, where each entry stores (node, row, col).
Maintain a hashmap from col → [(row, value)].
After traversal, sort each column first by row, then by value.
Step 4: Finally, I collected all columns in ascending order of col index and built the output list.
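Steps 2–4 above can be sketched in Python like this (the `Node` class is an assumed minimal tree representation):

```python
from collections import defaultdict, deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def vertical_traversal(root):
    if not root:
        return []
    cols = defaultdict(list)          # col -> [(row, value)]
    q = deque([(root, 0, 0)])         # BFS queue of (node, row, col)
    while q:
        node, row, col = q.popleft()
        cols[col].append((row, node.val))
        if node.left:
            q.append((node.left, row + 1, col - 1))
        if node.right:
            q.append((node.right, row + 1, col + 1))
    out = []
    for col in sorted(cols):
        # Sort by row, then by value for nodes sharing the same cell.
        out.extend(v for _, v in sorted(cols[col]))
    return out
```

Sorting each column's `(row, value)` pairs handles the tie-breaking rule where nodes with equal coordinates appear in non-decreasing order of value.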
Imagine you are building a notification service like Twitter. When a user sends a message, it should be delivered instantly to all devices of the receiver. How would you design this system to handle millions of users concurrently?
Tip 1: My first thought was a simple client-server model, where the server stores the message and pushes it to the receiver. But this wouldn’t scale well with millions of concurrent connections.
Tip 2: Then I suggested using WebSockets to maintain a persistent connection between server and client. This allows real-time push instead of polling.
Tip 3: To ensure scalability, I proposed:
Use a message queue (Kafka/RabbitMQ) to handle spikes in message traffic.
Maintain user sessions in a distributed cache (like Redis) to quickly identify active devices.
Use load balancers to distribute connections across multiple servers.
Tip 4: For reliability:
If the receiver is offline, messages should be stored in a persistent DB (like Cassandra or DynamoDB).
Once the user comes online, the system pushes all pending messages.
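The store-and-forward idea in Tip 4 can be sketched as a toy in-memory service; a real system would replace the dicts with Redis sessions, a Kafka queue, and a persistent store like Cassandra, and `push` would be a WebSocket send:

```python
from collections import defaultdict

class NotificationService:
    def __init__(self):
        self.online = {}                  # user_id -> push callback (live device)
        self.pending = defaultdict(list)  # user_id -> messages queued while offline

    def connect(self, user_id, push):
        """Mark a user online and flush anything queued while they were offline."""
        self.online[user_id] = push
        for msg in self.pending.pop(user_id, []):
            push(msg)

    def disconnect(self, user_id):
        self.online.pop(user_id, None)

    def send(self, user_id, msg):
        if user_id in self.online:
            self.online[user_id](msg)          # real-time push
        else:
            self.pending[user_id].append(msg)  # persist until reconnect
```

The key design point is that `send` never fails for an offline user; delivery is simply deferred until `connect` replays the backlog.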
