Table of contents
ElasticSearch interview questions
Frequently Asked Questions
Explain NRT in relation to Elasticsearch?
Provide a list of ELK log analytics use cases?
What is the purpose of Elastic Stack Reporting?
Last Updated: Jun 12, 2024

ElasticSearch Interview Questions

Author Prerna Tiwari


Elasticsearch is an open-source, RESTful, scalable, document-based search engine based on the Apache Lucene library. It stores, retrieves and manages textual, numerical, geospatial, structured, and unstructured data as JSON documents via the CRUD REST API or ingestion tools like Logstash.

If you are preparing for an ElasticSearch interview and want a quick guide before your interview, you have come to the right place.

This blog features the top 30 ElasticSearch Interview Questions. Now, without wasting time, let's get started with some crucial ElasticSearch interview questions.

ElasticSearch interview questions

1. Explain Elasticsearch briefly.

Elasticsearch is a search engine built on Apache Lucene that stores, retrieves, and manages document-oriented and semi-structured data. It offers real-time search and analytics for structured and unstructured text as well as numerical and geospatial data.


2. Provide step-by-step instructions for starting an Elasticsearch server.

The following steps describe the procedure:

  • Click on the Windows Start button in the bottom-left corner of the desktop screen.


  • To open a command prompt, type command or cmd into the Windows Start menu and press Enter.


  • Change to the bin folder of the Elasticsearch folder that was created after it was installed.


  • To start the Elasticsearch server, type elasticsearch.bat and press Enter.


  • This will launch Elasticsearch in the background from the command prompt. Open a browser and type http://localhost:9200 into the address bar. This should show the name of the Elasticsearch cluster and other metadata about the node.
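
If the server is up, the root endpoint returns a small JSON document. As a sketch, the response can be parsed like this in Python (the field names follow Elasticsearch's root endpoint; the values here are made up for illustration):

```python
import json

# Hypothetical response body returned by GET http://localhost:9200
# (field names follow Elasticsearch's root endpoint; values are invented).
sample_response = """
{
  "name": "node-1",
  "cluster_name": "my-cluster",
  "version": {"number": "8.13.0"},
  "tagline": "You Know, for Search"
}
"""

info = json.loads(sample_response)
print(info["cluster_name"])  # the cluster name reported by the server
```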

3. Explain Elasticsearch Cluster.

It is a networked group of one or more node instances in charge of task distribution, searching, and indexing across all nodes.


4. What is an Elasticsearch Node?

A node is an instance of Elasticsearch. Master nodes, Data nodes, Client nodes, and Ingest nodes are the various node types.

These are explained below:

  • Data nodes store data and perform CRUD (Create/Read/Update/Delete), search, and aggregations.


  • Master nodes assist in cluster configuration and management by adding and removing nodes.


  • Client nodes send data-related requests to data nodes and cluster requests to the master node and Ingest nodes for document pre-processing before indexing.

5. In an Elasticsearch cluster, what is an index?

An Elasticsearch cluster can contain multiple indices, and an index is roughly comparable to a database in a relational system. Historically, an index could contain multiple types (tables), which in turn held documents (records/rows) made up of properties (columns); note that mapping types have been removed in recent Elasticsearch versions.


6. Explain Mapping in Elasticsearch?

Mapping is the process of specifying how a document and the fields contained within it will be stored and indexed. Each document is made up of fields, each with its data type. When mapping data, you create a mapping definition, including a list of fields relevant to the document. Metadata fields, such as the _source field, are included in a mapping definition to customize how a document's associated metadata is handled.
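
As an illustration, a mapping definition might look like the following sketch (the index fields and types here are invented for this example, not taken from any particular application):

```python
# A sketch of a mapping definition for a hypothetical "products" index.
mapping = {
    "mappings": {
        "_source": {"enabled": True},           # metadata field controlling source storage
        "properties": {
            "title":      {"type": "text"},      # analyzed full-text field
            "price":      {"type": "float"},
            "created_at": {"type": "date"},
            "location":   {"type": "geo_point"},
        },
    }
}

# Each field carries its own data type, as described above.
for field, definition in mapping["mappings"]["properties"].items():
    print(field, "->", definition["type"])
```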

7. What is a Document in the context of Elasticsearch?

A document is a JSON file that is stored in Elasticsearch. It corresponds to a row in a relational database table.


8. Could you explain SHARDS in terms of Elasticsearch?

As the number of documents grows, hard disk capacity and processing power become insufficient, and responding to client requests becomes more difficult. In this case, the indexed data is divided into small chunks called shards, which improves retrieval of results during a data search.


9. Define REPLICA, and what is the benefit of making a duplicate?

A replica is an exact copy of the Shard, which is used to boost query throughput or achieve high availability during peak loads. These replicas aid in the efficient management of requests.
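
A quick sketch of how shard and replica counts interact (the settings keys are Elasticsearch's; the numbers are illustrative):

```python
# Sketch of index settings controlling shards and replicas.
# With 3 primary shards and 1 replica of each, the index spans 6 shards in total.
settings = {
    "settings": {
        "number_of_shards": 3,     # primary shards the index is split into
        "number_of_replicas": 1,   # copies of each primary shard
    }
}

primaries = settings["settings"]["number_of_shards"]
replicas = settings["settings"]["number_of_replicas"]
total_shards = primaries * (1 + replicas)
print(total_shards)  # 6
```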


10. Explain how to add or create an index in Elasticsearch Cluster.

To create a new index, use the Create Index API. When creating the index, you can specify the index settings, the index field mappings, and index aliases.
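
A minimal sketch of a Create Index request body combining the three parts mentioned above (the index alias and field names are illustrative):

```python
import json

# Sketch of the request body for PUT /my-index (Create Index API),
# combining settings, mappings, and aliases.
create_index_body = {
    "settings": {"number_of_shards": 1, "number_of_replicas": 1},
    "mappings": {"properties": {"message": {"type": "text"}}},
    "aliases": {"my-alias": {}},
}

payload = json.dumps(create_index_body)
print(payload)
```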


11. Explain the concepts of relevancy and scoring in Elasticsearch.

Consider an internet search for, say, "Apple". It could return results about the fruit or about the company Apple. You might want to buy the fruit online, look up a recipe, or learn about the health benefits of eating apples. Alternatively, you might want to learn about Apple Inc.'s latest product offerings, its stock price, or how the company has performed on the NASDAQ over the last six months, one year, or five years.

Similarly, when we search Elasticsearch for a document (a record), we are interested in finding the relevant information. The Lucene scoring algorithm calculates the probability of returning relevant information, i.e., relevance. Lucene ranks a document based on how frequently the search term appears in that document, how frequently it appears across the index, and the query designed using various parameters.


12. What are the various methods for searching in Elasticsearch?

The following are the various methods for searching in Elasticsearch:

  1. Search API across multiple types and indices: We can search for an entity across multiple types and indices using the Search API.
  2. URI search: We can send a search request with parameters in the URI (Uniform Resource Identifier) itself.
  3. Query DSL (Domain Specific Language) search within the body: The query is expressed as a JSON request body using the DSL.
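
Methods 2 and 3 above can be sketched side by side; the index and field names are assumptions for illustration:

```python
from urllib.parse import urlencode

# 1. URI search: parameters go in the query string.
params = urlencode({"q": "title:elasticsearch", "size": 10})
uri_search = f"/products/_search?{params}"

# 2. Query DSL: the same search expressed as a JSON request body.
dsl_body = {"query": {"match": {"title": "elasticsearch"}}, "size": 10}

print(uri_search)
```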


13. What are the different types of queries supported by Elasticsearch?

There are two types of queries: full-text (match) queries and term-based queries.

  • Full-text queries analyze the search text with the field's analyzer before matching, which makes them suitable for searching analyzed text fields.


  • Term-based queries locate documents based on exact values in structured data, such as date ranges, IP addresses, prices, and product IDs. Term-level queries do not analyze search terms the way full-text queries do; instead, they match the exact terms stored in a field. This means a term query may produce poor or no results when run against analyzed text fields.
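
The contrast can be sketched as request bodies (the field names "description", "status", and "price" are assumptions for illustration):

```python
# match: the search text is analyzed before matching (full-text query).
full_text_query = {"query": {"match": {"description": "Quick Brown Fox"}}}

# term: matches the exact value stored in the index, with no analysis.
term_query = {"query": {"term": {"status": "published"}}}

# range: another term-level query, useful for prices or dates.
range_query = {"query": {"range": {"price": {"gte": 10, "lte": 50}}}}

print(list(full_text_query["query"]), list(term_query["query"]))
```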


14. Explain how aggregation works in Elasticsearch?

Aggregations help summarize data from a search query. Examples include metrics such as averages, minimums, maximums, sums, and other statistics.
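
A sketch of an aggregations request body, with the computation Elasticsearch would perform shown locally on a sample (the "price" field is illustrative):

```python
# Request body asking for an average and a maximum over a numeric field.
agg_body = {
    "size": 0,  # return only aggregation results, no hits
    "aggs": {
        "avg_price": {"avg": {"field": "price"}},
        "max_price": {"max": {"field": "price"}},
    },
}

# What Elasticsearch computes server-side, simulated on sample values:
prices = [5.0, 10.0, 15.0]
avg_price = sum(prices) / len(prices)
max_price = max(prices)
print(avg_price, max_price)  # 10.0 15.0
```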


15. Can you compare term-based queries and full-text queries?

Full-text queries are written in Elasticsearch's Query DSL (Domain-Specific Language) and sent in the HTTP request body. They have the advantage of being clear and detailed in their intent, making it easier to tune these queries over time.

Term-based queries work directly against the inverted index, a hash-map-like data structure that helps locate exact text or strings, such as the body of an email, keywords, numbers, and dates.


16. What are Elasticsearch's data storage capabilities?

Elasticsearch is a search engine for storing and searching complex data structures that have been indexed and serialized as JSON documents.


17. What is an Elasticsearch Analyzer?

Analyzers are used for text analysis; they can be built-in or custom. An analyzer comprises zero or more character filters, exactly one tokenizer, and zero or more token filters. Character filters preprocess the raw character stream before tokenization, for example by removing HTML tags, replacing keys found in the stream with the values defined in a mapping char filter, or replacing characters that match a specific pattern.

The tokenizer splits the character stream into tokens, for example on whitespace: when it encounters whitespace between characters, a whitespace tokenizer breaks the stream there. Token filters then transform these tokens, for example converting them to lowercase or removing stop words such as 'a', 'an', etc.
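
The three stages can be imitated with a toy Python function; this is a simplified simulation of the pipeline described above, not Elasticsearch's actual implementation:

```python
import re

STOP_WORDS = {"a", "an", "the"}

def analyze(text: str) -> list[str]:
    # character filter: strip HTML tags from the raw stream
    text = re.sub(r"<[^>]+>", "", text)
    # tokenizer: split on whitespace
    tokens = text.split()
    # token filters: lowercase, then remove stop words
    tokens = [t.lower() for t in tokens]
    return [t for t in tokens if t not in STOP_WORDS]

print(analyze("The <b>Quick</b> Fox"))  # ['quick', 'fox']
```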


18. Provide a list of different types of Elasticsearch analyzers?

The types of Elasticsearch analyzers are listed below:

  1. Standard Analyzer: This type of analyzer is equipped with a standard tokenizer, which divides the string stream into tokens based on the maximum token length configured; a lowercase token filter, which converts the tokens to lower case; and a stop token filter, which removes stop words such as 'a', 'an', and 'the'.
  2. Simple Analyzer: This analyzer converts a string stream into a text token whenever it encounters numbers or special characters. All text tokens are converted to lowercase characters by a simple analyzer.
  3. Whitespace Analyzer: This analyzer converts a string stream into a text token when it encounters white space between these strings or statements. It keeps the token case as in the input stream.
  4. Stop Analyzer: This type of analyzer is similar to the simple analyzer, but it also removes stop words from the string stream, such as 'a', 'an', and 'the.' 
  5. Keyword Analyzer: This analyzer returns the entire string stream as a single token in its original form. By adding filters, it can be turned into a custom analyzer.
  6. Pattern Analyzer: This analyzer divides a string stream into tokens based on the regular expression. This regular expression operates on the string stream rather than the tokens.
  7. Fingerprint Analyzer: The fingerprint analyzer converts a string stream to lower case, removes extended characters, sorts, and concatenates the results into a single token.
  8. Language Analyzer: This type of analyzer is used to analyze specific language texts. Plug-ins can support language analyzers. Kuromoji for Japanese, Stempel, Ukrainian Analysis, Nori for Korean, and Phonetic plug-ins are among them. There are additional plug-ins for Indian and non-Indian languages, such as Asian languages (Japanese, Vietnamese, and Tibetan).
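
The behavioral differences between a few of these analyzers can be imitated with toy Python functions (simplified simulations, not Elasticsearch code):

```python
import re

def simple_analyzer(text: str) -> list[str]:
    # split whenever a non-letter appears, then lowercase everything
    return [t.lower() for t in re.split(r"[^A-Za-z]+", text) if t]

def whitespace_analyzer(text: str) -> list[str]:
    # split only on whitespace, keeping the original case
    return text.split()

def keyword_analyzer(text: str) -> list[str]:
    # the entire stream as a single token
    return [text]

sample = "Top-10 Results"
print(simple_analyzer(sample))      # ['top', 'results']
print(whitespace_analyzer(sample))  # ['Top-10', 'Results']
print(keyword_analyzer(sample))     # ['Top-10 Results']
```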


19. How does Elasticsearch Tokenizer work?

Tokenizers accept a string stream, split it into individual tokens, and display the output as a collection/array of these tokens. Tokenizers are classified into word-oriented, partial, and structured text tokenizers.


20. How do Elasticsearch Filters work?

Token filters receive the stream of tokens produced by the tokenizer and can add, remove, or modify them. When tokens are matched against a search condition, the comparison effectively returns a Boolean result, such as true or false.

The comparison can check whether the value for the searched condition:

  • matches the filtered token texts, or does not match,
  • matches one of the filtered token texts, or does not match any of the specified tokens,
  • falls within a given range of token text values, or does not,
  • exists among the token texts in the search condition, or does not.


21. How does an Elasticsearch ingest node work?

Before indexing, an ingest node pre-processes documents by running them through a pipeline: a series of processors that modify the document sequentially, for example one processor removing a field, followed by another renaming a field. This helps normalize documents before indexing, which in turn leads to faster search results.
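
A sketch of such a pipeline definition, with its effect on a document simulated locally (the field names "debug_info" and "hostname" are illustrative):

```python
# Pipeline with two processors, matching the answer above:
# one removes a field, the next renames another.
pipeline = {
    "description": "normalize incoming documents",
    "processors": [
        {"remove": {"field": "debug_info"}},
        {"rename": {"field": "hostname", "target_field": "host.name"}},
    ],
}

# What the pipeline would do to a document, simulated in plain Python:
doc = {"debug_info": "x", "hostname": "web-1", "message": "ok"}
doc.pop("debug_info")                      # remove processor
doc["host.name"] = doc.pop("hostname")     # rename processor
print(doc)  # {'message': 'ok', 'host.name': 'web-1'}
```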


22. What are the functions of Elasticsearch attributes like enabled, index, and store?

When we need to store a specific field but exclude it from indexing, we use Elasticsearch's enabled attribute. This is accomplished by adding "enabled": false to the top-level mapping or to object fields.

Elasticsearch's index attribute determines three ways a string can be indexed (these string values come from older, pre-5.x mappings; recent versions use a boolean index attribute instead):

  • 'analyzed' specifies that the string will be analyzed before being indexed as a full-text field.
  • 'not_analyzed' indexes a string stream to make it searchable without analyzing it.
  • 'no' indicates that the string will not be indexed and will not be searchable.

Elasticsearch stores the original document on disk (in the _source metadata field) even when the store attribute is false, and retrieves it as quickly as possible; setting "store": true additionally stores a field on its own so it can be fetched without loading the entire _source.


23. What is the purpose of a character filter in Elasticsearch Analyzer?

The character filter in the Elasticsearch analyzer is optional. These filters manipulate the input character stream before tokenization, for example by replacing each occurrence of a key with the value mapped to it.

We can use the mapping character filter with parameters such as mappings and mappings_path. The mappings parameter contains a list of keys and their corresponding replacement values, whereas mappings_path is a path, registered in the config directory, to a file containing such mappings.
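
A sketch of a mapping character filter definition, plus a local simulation of what it does to the stream (the replacement rules here are illustrative):

```python
# Mapping character filter definition, as it would appear in analysis settings.
char_filter = {
    "type": "mapping",
    "mappings": [":) => _happy_", ":( => _sad_"],
}

def apply_mapping_filter(text: str, filter_def: dict) -> str:
    # replace each key with its mapped value, as the filter would
    for rule in filter_def["mappings"]:
        key, value = [part.strip() for part in rule.split("=>")]
        text = text.replace(key, value)
    return text

print(apply_mapping_filter("great :)", char_filter))  # great _happy_
```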


24. What are the benefits of REST API in relation to Elasticsearch?

A REST API is a communication method between systems that uses the Hypertext Transfer Protocol (HTTP) to exchange data requests in XML or JSON format.

The REST protocol is stateless and separates the user interface from the server and storage data, resulting in improved portability of the user interface with any platform. It also improves scalability by allowing components to be implemented independently, making applications more flexible to work with.

Apart from the format used for data exchange, a REST API is platform- and language-independent.


25. Explain the functionality and significance of installing X-Pack for Elasticsearch?

X-Pack is an extension that comes with Elasticsearch. Privileges/Permissions, Security (Role-based access, Roles, and User security), monitoring, reporting, alerting, and many other features are included in X-Pack.


26. What is the purpose of the cat API in Elasticsearch?

The cat API commands provide an analysis, overview, and health of an Elasticsearch cluster, including data on aliases, allocation, indices, and node attributes, to name a few. These cat commands take a query string as a parameter and return headers and their associated data from the JSON document.


27. Where and how will Kibana be helpful in Elasticsearch?

Kibana is a component of the ELK Stack, a log analysis solution. It is an open-source visualization tool that analyzes ever-growing logs in various graph formats such as line, pie, bar, coordinate maps, etc.


28. How does Logstash work with Elasticsearch?

Logstash is an open-source, server-side ETL engine that ships with the ELK Stack and collects and processes data from a variety of sources.



29. Explain the ELK Stack and its contents in detail.

Today's businesses, large and small, generate information in the form of reports, customer follow-ups, historical and current orders, and customer reviews from online and offline logs. It is critical to store and analyze these logs to extract valuable feedback for the business.


An inexpensive log analysis tool is required to maintain these data logs. The ELK Stack is a set of tools: Elasticsearch for search and analysis, Logstash for collection and transformation, Kibana for visualization and data management, Beats for log parsing and shipping, and X-Pack for monitoring and reporting.


30. How do Beats work with Elasticsearch?

Beats is a family of open-source data shippers that transport data directly to Elasticsearch or Logstash, where it can be filtered or even processed before being viewed with Kibana. Examples of transported data include cloud data, audit data, log files, network traffic, and Windows event logs.

We hope these ElasticSearch interview questions have helped you. Now, let's look at some FAQs on the same topic.


Frequently Asked Questions

Explain NRT in relation to Elasticsearch?

Elasticsearch is a Near Real-Time (NRT) search platform: there is only a slight latency (about one second by default) between the time a document is indexed and the time it becomes searchable.

Provide a list of ELK log analytics use cases?

The following are successful ELK log analytics use cases:

  • Compliance
  • E-commerce search
  • Fraud detection
  • Market intelligence
  • Risk management
  • Security analytics

What is the purpose of Elastic Stack Reporting?

The Reporting API allows you to retrieve data in PDF, PNG image, and CSV spreadsheet formats, which can then be shared or saved as needed.


In this article, we have discussed 30 ElasticSearch interview questions in detail.

After reading about these ElasticSearch interview questions, are you not feeling excited to explore more articles on related topics? Don't worry; Coding Ninjas has you covered. Check out our Java interview questions, C++ interview questions, and embedded C interview questions, among many more.

Also see, Interview questions for freshers

For peeps out there who want to learn more about Data Structures, Algorithms, Power programming, JavaScript, or any other upskilling, please refer to guided paths on Coding Ninjas Studio. Enroll in our courses, go for mock tests, solve problems available, and interview puzzles. Also, you can focus on interview stuff- interview experiences and an interview bundle for placement preparations. 

Do upvote our blog to help other ninjas grow. 

Happy Coding!
