Enlog interview experience: real-time questions & tips from candidates to crack your interview

Backend Developer

Enlog
3 rounds | 5 Coding problems

Interview preparation journey

Journey
I began my professional journey with a strong curiosity about technology and problem-solving, which naturally led me toward backend development and system design. Over time, I have gained hands-on experience across a wide range of areas: from building robust backend applications with Python, Django, and Flask, to working with databases like PostgreSQL, DuckDB, and ClickHouse, and exploring real-time synchronization tools such as SymmetricDS and Bucardo.

My career path has been shaped by solving real-world challenges, including:

  • Developing scalable backend systems that power reliable applications
  • Setting up distributed and fault-tolerant infrastructure (Kafka, Rook + Ceph, Kubernetes clusters on EC2)
  • Designing tiered storage strategies combining SSDs, HDDs, and S3 to optimize performance and cost
  • Building data synchronization pipelines to reduce database load and ensure high availability
  • Exploring cloud-native deployments with Docker, Docker Compose, and automation tools (Boto3, Certbot)
  • Implementing real-time monitoring and observability solutions using Prometheus, Grafana, Loki, and Django WebSockets

Each project has deepened my understanding of backend engineering, scalable architecture, automation, and system resilience. What drives me is the ambition to become a highly skilled systems and backend architect: someone who not only writes efficient code but also designs infrastructures that are fault-tolerant, scalable, and future-ready. I believe in continuous learning, whether that means exploring cutting-edge backend frameworks, databases, cloud technologies, or advanced DevOps practices.

Looking ahead, my goals are to:

  • Master cloud-native architecture and backend DevOps practices
  • Contribute to open-source projects in the backend, database, and distributed-systems ecosystem
  • Grow into a leadership role where I can mentor teams and drive end-to-end backend and system design for complex, large-scale applications
Application story
My application journey has always been about turning knowledge into real-world impact. After building a strong foundation in backend development with Python, Django, and Flask, I began applying my skills through projects that tested both my technical ability and my problem-solving mindset.

One of my early applications involved developing scalable backend systems capable of handling real-time data efficiently. This led me to work with PostgreSQL, DuckDB, and ClickHouse, where I explored database optimization and real-time synchronization using SymmetricDS and Bucardo. To ensure reliability, I applied my learning to distributed and fault-tolerant setups using Kafka, Rook + Ceph, and Kubernetes on EC2. These experiences taught me not just deployment, but also system resilience under production-like conditions.

Another application of my skills was in cloud-native deployments: packaging applications with Docker and Docker Compose, automating workflows with Boto3, and securing environments with Certbot. To make systems truly production-ready, I implemented real-time monitoring solutions using Prometheus, Grafana, Loki, and Django WebSockets. These applications gave me practical exposure to observability and performance tuning.

Every project I worked on deepened my understanding of scalable architecture, backend engineering, and automation. More importantly, it reinforced my ambition to grow as a systems and backend architect who can design infrastructures that are fault-tolerant, future-ready, and impactful.
Preparation
Duration: 6 Months
Topics: I prepared for my professional journey through a combination of structured learning, hands-on projects, and continuous exploration:

  • Strong foundations: I started by building a solid base in Python and backend frameworks (Django, Flask) along with core DBMS concepts; this gave me confidence in backend problem-solving.
  • Practical projects: instead of only theory, I focused on real-world projects such as deploying applications with Docker and Kubernetes, setting up Kafka and distributed systems to understand scalability, and experimenting with PostgreSQL, DuckDB, and ClickHouse for different use cases.
  • System design and DevOps exposure: I explored cloud-native architecture, automation (Boto3, Certbot), and monitoring stacks like Prometheus, Grafana, and Loki to learn how backend systems are maintained in production.
  • Learning from challenges: every project came with failures (cluster misconfigurations, sync issues, scaling bottlenecks); solving these taught me more than tutorials ever could.
  • Continuous learning: I stay updated by following documentation and open-source projects, practicing interview-style backend/system design questions, and experimenting locally with tools such as SymmetricDS, Bucardo, and Ceph to simulate real-world setups.
  • Future preparation: I have set goals to dive deeper into cloud-native DevOps, advanced backend patterns, and open-source contributions.
Tips

Tip 1: Start with strong backend fundamentals in Python, Django, and databases to build a solid foundation.
Tip 2: Work on real-world projects where you deploy applications using Docker, Kubernetes, and monitoring tools.
Tip 3: Continuously practice system design and explore distributed systems such as Kafka, Ceph, and SymmetricDS.

Application process
Where: LinkedIn
Eligibility: 60% throughout (Salary Package: 7.2 LPA)
Resume tips

Tip 1: Always tailor your resume to the specific job role by highlighting relevant skills and projects.

Tip 2: Use clear action verbs and quantify achievements (e.g., “Optimized queries to reduce database load by 40%”).

Interview rounds

01
Round
Easy
Online Coding Interview
Duration: 60 minutes
Interview date: 1 Sep 2022
Coding problems: 3

1. DBMS

  • What are the advantages of using a DBMS?
  • What are rows and columns in a DBMS?
  • What is normalization, and why is it important in a DBMS?
  • Explain the concept of a schema in a DBMS.
  • What is an index in a DBMS, and how is it used?
Problem approach

1. Advantages of a DBMS:
  • Data integrity: ensures that the data is accurate and consistent.
  • Data security: provides controlled access to sensitive data by setting permissions for different users.
  • Efficient data retrieval: optimizes queries and indexing, allowing faster data retrieval.
  • Reduced redundancy: avoids duplicate data by enforcing normalization.
  • Backup and recovery: offers automatic backup and recovery mechanisms.
  • Concurrent access: allows multiple users to access the database at the same time without conflicts.

2. Rows (tuples): a row represents a single record or entity. Each row contains a value for every attribute (column).
Columns (attributes): a column represents a property or characteristic of the entity. Each column has a data type, such as integer, string, or date.

3. Normalization is the process of organizing data in a way that reduces redundancy and dependency. It involves dividing large tables into smaller ones and defining relationships between them to ensure data integrity.
Importance:
Eliminates redundant data.
Prevents anomalies during data operations (insertion, update, deletion).
Improves data integrity and consistency.
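A toy sketch of the redundancy point, using Python's built-in sqlite3 module (the table and column names here are illustrative, not from the interview): after normalizing, a customer's email lives in one place, so an update touches a single row instead of every order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized form: customer details repeated on every order row.
cur.execute("CREATE TABLE orders_flat (order_id INTEGER, customer_name TEXT, customer_email TEXT, item TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?, ?, ?)", [
    (1, "Asha", "asha@example.com", "Keyboard"),
    (2, "Asha", "asha@example.com", "Mouse"),
])

# Normalized form: the customer is stored once and referenced by key.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER REFERENCES customers(id), item TEXT)")
cur.execute("INSERT INTO customers VALUES (1, 'Asha', 'asha@example.com')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)", [(1, 1, "Keyboard"), (2, 1, "Mouse")])

# Changing the email now updates exactly one row, avoiding update anomalies.
cur.execute("UPDATE customers SET email = 'asha@new.example.com' WHERE id = 1")
rows = cur.execute(
    "SELECT o.order_id, c.email FROM orders o JOIN customers c ON o.customer_id = c.id"
).fetchall()
print(rows)
```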
 

4. A schema in DBMS is the structure that defines the organization of data in a database. It includes tables, views, relationships, and other elements. A schema defines the tables and their columns, along with the constraints, keys, and relationships.

5. An index is a data structure that improves the speed of data retrieval operations on a database table. It works like a table of contents in a book, allowing the database to quickly find the location of a record based on a column value.
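As a small illustration (again using sqlite3 with made-up table names), EXPLAIN QUERY PLAN shows the same lookup switching from a full table scan to an index search once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, email TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"user{i}@example.com") for i in range(1000)])

query = "SELECT id FROM users WHERE email = 'user42@example.com'"

# Without an index, the plan is a scan of the whole table.
plan_before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)

cur.execute("CREATE INDEX idx_users_email ON users(email)")

# With the index, the plan becomes a search using idx_users_email.
plan_after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)
```

The exact wording of the plan text varies between SQLite versions, but the scan-versus-index-search distinction is stable.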

2. Operating System

  • What is a process, and what is a process table?
  • What is thrashing?
  • What is virtual memory?
  • What is a kernel?
Problem approach

1. A process is an instance of a program in execution. For example, a Web Browser is a process, and a shell (or command prompt) is a process. The operating system is responsible for managing all the processes that are running on a computer and allocates each process a certain amount of time to use the processor. In addition, the operating system also allocates various other resources that processes will need, such as computer memory or disks. To keep track of the state of all the processes, the operating system maintains a table known as the process table. Inside this table, every process is listed along with the resources the process is using and the current state of the process.

2. Thrashing is a situation in which the performance of a computer degrades or collapses. It occurs when the system spends more time servicing page faults than executing useful work. While some page-fault handling is necessary to realize the benefits of virtual memory, excessive paging hurts the system: as the page-fault rate increases, more requests queue up at the paging device, which in turn increases the service time for each page fault.

3. Virtual memory creates the illusion that each user has one or more contiguous address spaces, each beginning at address zero; such virtual address spaces can be very large. The idea of virtual memory is to use disk space to extend the RAM, so running processes don't need to care whether their memory comes from RAM or disk. The illusion of such a large amount of memory is created by subdividing the virtual memory into smaller pieces (pages), which are loaded into physical memory whenever a process needs them.

4. A kernel is the central component of an operating system that manages the computer's operations and hardware, chiefly memory and CPU time. It is the core of the operating system and acts as a bridge between applications and the data processing performed at the hardware level, via inter-process communication and system calls.

3. Count Even Or Odd

Hard
30m average time
60% success
Asked in companies: Uber, Ola, Expedia Group

Tanmay and Rohit are best buddies. One day, Tanmay gives Rohit a problem to test his intelligence and skills: he gives him an array of N natural numbers and asks him to handle the following queries:

Query 0 :

0 x y

This operation modifies the element present at index x to y.

Query 1 :

1 x y 

This operation counts the number of even numbers in range x to y inclusive.

Query 2 :

2 x y 

This operation counts the number of odd numbers in range x to y inclusive.

02
Round
Easy
Online Coding Interview
Duration: 75 minutes
Interview date: 5 Sep 2022
Coding problems: 1

1. Swap Two Numbers

Easy
10m average time
Asked in companies: CIS - Cyber Infrastructure, Ernst & Young (EY), Cybage Software

You are given two numbers 'a' and 'b' as input.


You must swap the values of 'a' and 'b'.


For Example:
Input: 
'a' = 8, 'b' = 5

Output:
5 8

Explanation:
Initially, the value of 'a' and 'b' is 8 and 5, respectively.

After swapping, the value of 'a' is 5, and the value of 'b' is 8.
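A minimal Python sketch (the judge's exact input/output format may differ): Python's tuple packing and unpacking swaps the two values without a temporary variable.

```python
def swap(a, b):
    # The right-hand side builds the tuple (b, a) first, then unpacks it,
    # so no explicit temporary variable is needed.
    a, b = b, a
    return a, b

print(*swap(8, 5))  # prints: 5 8
```

In languages without tuple unpacking, the classic approach is a temporary variable (`t = a; a = b; b = t`).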
03
Round
Easy
HR Round
Duration: 55 minutes
Interview date: 16 Sep 2022
Coding problems: 1

1. HR Questions

  • Tell me about yourself.
  • What are your strengths?
  • Why do you want to work here?
  • What are your weaknesses?
  • How do you handle stress and pressure?
  • Do you prefer working alone or in a team?
  • How do you handle failure?
  • What do you know about our company?
