OpenShift Interview Questions for Freshers
Q1. What is OpenShift?
OpenShift is Red Hat's cloud development Platform as a Service (PaaS). It is an open-source solution that enables businesses to migrate their traditional application infrastructure and platforms from physical and virtual environments to the cloud. It allows a wide range of applications to be developed and deployed quickly on the OpenShift cloud platform.
Q2. What are the features of OpenShift?
The features of OpenShift are as follows:-
- Support for multiple databases and languages
- Rest API support
- Extensible Cartridge System
- One-Click Deployment
- Multi Environment Support
- Standardized Developers' workflow
- Automatic Application Scaling
- Responsive Web Console
- Rich Command-line Toolset
- Remote SSH login support for the application
- Self-service on Demand Application Stack
- Remote Debugging of Applications
- Built-in Database Services
- Continuous Integration and Release Management
- IDE Integration
Q3. What are the benefits of using OpenShift?
OpenShift provides a single platform for business units to host their apps on the cloud without worrying about the operating system, making it very simple to use, build, and deploy cloud-based apps. It provides managed hardware and network resources for all types of development and testing, which is one of its essential characteristics. PaaS developers using OpenShift can construct their environments based on their needs.
Q4. What are deployment strategies?
A deployment strategy defines how an application is upgraded or changed. With an appropriate strategy, modifications can be made without any downtime. The blue-green deployment technique is one of the most widely used deployment strategies.
Q5. Define the OpenShift command-line interface (CLI).
The OpenShift CLI is a command-line interface for managing OpenShift applications that lets you control the end-to-end application lifecycle. It provides options for both basic and advanced application configuration, as well as management, deployment, and application-addition features.
Q6. What are the benefits of using DevOps tools?
DevOps tools are ideal for boosting the flexibility of software delivery. They also help to increase deployment frequency and reduce failure rates, and they support faster recovery and better time management between fixes.
Q7. What is OpenShift Origin?
OpenShift Origin is the open-source upstream community project used in OpenShift Container Platform, OpenShift Online, and OpenShift Dedicated. Origin is an application lifecycle management and DevOps platform built on a core of Docker container packaging and Kubernetes container cluster management. The Origin project's source code is available on GitHub under the Apache License (Version 2.0).
Q8. What is OpenShift Online?
OpenShift Online is Red Hat's public cloud application development and hosting platform. Provided as a service by the OpenShift community, it allows users to swiftly build, deploy, and scale containerized applications on the cloud, and it automates application provisioning, maintenance, and scaling so that developers can concentrate on writing application logic.
Q9. Explain OpenShift Dedicated.
OpenShift Dedicated is Red Hat's managed private cluster product, based on Red Hat Enterprise Linux and built around a core of Docker-powered application containers with Kubernetes orchestration and management. OpenShift Dedicated is available on both the Amazon Web Services (AWS) and Google Cloud Platform (GCP) marketplaces.
Q10. What is OpenShift Enterprise?
Red Hat's OpenShift Enterprise is a Platform as a Service (PaaS) that gives developers and IT departments an auto-scaling cloud application platform for deploying new applications on secure, scalable resources with low configuration and administrative overhead. Java, Ruby, and PHP are just a few programming languages and frameworks that OpenShift Enterprise supports. The application life cycle is supported by integrated development tools such as Eclipse integration, JBoss Developer Studio, and Jenkins.
Q11. What is Route in OpenShift Container Platform?
You can use a route to host your application at a public URL (Uniform Resource Locator). Depending on the application's network security setup, a route can be secure or insecure. An HTTP (Hypertext Transfer Protocol)-based route is an unsecured route that exposes a service on an unsecured application port and uses the basic HTTP routing protocol.
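For illustration, here is a minimal sketch of a secure (edge-terminated) Route manifest; the hostname, service name, and port are hypothetical:

```yaml
# Sketch of a Route exposing a hypothetical "frontend" service at a public hostname.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend              # hypothetical route name
spec:
  host: www.example.com       # public URL at which the application is reachable
  to:
    kind: Service
    name: frontend            # the service that backs this route
  port:
    targetPort: 8080          # service port to forward traffic to
  tls:
    termination: edge         # makes the route secure; omit this block for an insecure HTTP route
```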
Q12. What distinguishes Docker from OpenShift?
- While Docker primarily delivers projects using runtime containers, OpenShift includes a runtime container as well as a REST API, coordination, and web interfaces for deploying and controlling individual containers.
- OpenShift models functional units using cartridges, which are essentially hooks defined in shell scripts that are invoked during a system call. Docker, on the other hand, achieves the same goal with Docker images, but this requires considerable manual effort behind the scenes.
- For container orchestration, OpenShift employs Kubernetes, whereas Docker uses Docker swarms.
- Docker uses the AUFS union file system for efficient disk and file cloning with copy-on-write sharing. OpenShift, however, does not require it and is not tied to such systems.
Q13. What are Red Hat OpenShift Pipelines?
Red Hat OpenShift Pipelines is a cloud-native continuous integration and delivery (CI/CD) system based on Kubernetes. It uses Tekton building blocks to automate deployments across several platforms, abstracting away the underlying implementation details. Tekton provides a set of standard custom resource definitions (CRDs) for constructing CI/CD pipelines that are portable across Kubernetes distributions.
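As an illustration, the sketch below defines a small Tekton Pipeline. It assumes the git-clone and buildah ClusterTasks that ship with OpenShift Pipelines are installed; the parameter, workspace, and image names are hypothetical:

```yaml
# Sketch of a Tekton Pipeline that clones a repository and builds an image.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-workspace
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone            # assumes the git-clone ClusterTask is installed
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
    - name: build
      taskRef:
        name: buildah              # assumes the buildah ClusterTask is installed
        kind: ClusterTask
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/myproject/myapp  # hypothetical image reference
```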
Q14. Explain how Red Hat OpenShift Pipelines uses triggers.
Triggers and Pipelines can be combined to create a full-featured CI/CD system in which Kubernetes resources define the entire CI/CD process. Triggers capture and process external events, such as a Git pull request, and extract key pieces of information. This event data is mapped to a set of parameters, which starts a series of tasks that can create and deploy Kubernetes resources and start the pipeline.
For example, we can use Red Hat OpenShift Pipelines to create a CI/CD pipeline for the application. The pipeline must be initiated for any new changes in the application repository to take effect. Triggers automate this process by catching and processing any change event, and then initiating a pipeline run to deliver the modified image.
Q15. What can OpenShift Virtualization do for you?
The OpenShift Container Platform add-on OpenShift Virtualization allows you to execute and manage virtual machine workloads alongside container workloads.
OpenShift Virtualization uses Kubernetes custom resources to introduce additional objects to your OpenShift Container Platform cluster to enable virtualization jobs.
Q16. What is Pod?
OpenShift Online uses the Kubernetes idea of a pod, which is one or more containers deployed on a single host and the smallest computational unit that can be built, deployed, and managed.
A pod is the container equivalent of a machine instance (physical or virtual). Each pod is allocated its own internal IP address and therefore owns its own port space, and containers within a pod can share local storage and networking.
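A minimal pod manifest might look like the following sketch (the image and names are hypothetical):

```yaml
# Sketch of a simple Pod with one container.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: quay.io/example/hello:latest   # hypothetical image
      ports:
        - containerPort: 8080               # the pod owns its own port space
```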
Q17. What is the use of admission plug-ins?
Admission plug-ins can be used to control how the OpenShift Container Platform works. They intercept resource requests submitted to the master API after those requests have been authenticated and authorized, and they validate resource requests and ensure that policies are followed. Admission plug-ins are used to enforce security policies, resource constraints, and configuration requirements.
Q18. What are the multiple identity providers supported by the OpenShift Container?
The OpenShift Container Platform supports a variety of identity providers (a sample configuration is sketched after the list), including:
- htpasswd
- GitHub
- Keystone
- GitLab
- Basic authentication
- LDAP
- Google
- Request header
- OpenID
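As an illustration, here is a minimal sketch of the cluster OAuth configuration with an htpasswd identity provider. It assumes a secret named htpass-secret, containing an htpasswd file, already exists in the openshift-config namespace; the provider name is hypothetical:

```yaml
# Sketch of the cluster OAuth configuration with an htpasswd identity provider.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: my_htpasswd_provider     # hypothetical provider name
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret        # secret in the openshift-config namespace
```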
Q19. What is Downward API?
The Downward API allows pods to retrieve their metadata without having to use the Kubernetes API. The metadata that can be retrieved and used to configure the running pods includes:
- Annotations
- Labels
- Pod CPU/memory requests and limits
- The pod's IP address, namespace, and name
Some of this information is made available as files in a volume, while other information is set as environment variables in the pod.
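For example, a pod can expose some of this metadata through environment variables using fieldRef and resourceFieldRef selectors; a minimal sketch (the image is hypothetical):

```yaml
# Sketch showing the Downward API exposing pod metadata as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
    - name: main
      image: quay.io/example/app:latest        # hypothetical image
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name         # pod name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace    # namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP          # pod IP address
        - name: MY_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: main
              resource: limits.cpu             # container CPU limit
```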
Q20. What are OpenShift cartridges?
OpenShift cartridges are the focal points of application development. Along with a pre-configured environment, each cartridge has its own libraries, build mechanisms, source code, routing logic, and connection logic. All of these elements contribute to the smooth operation of your application.
Q21. Define labels.
Labels are used to organize, group, and select API objects. Pods are "tagged" with labels, and services use label selectors to decide which pods to proxy to. This allows services to refer to groups of pods, even if the pods themselves have different containers.
Labels can be found in practically any object's metadata. As a result, labels can be used to group things that are arbitrarily similar, such as all of an application's pods, services, replication controllers, and deployment configurations.
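For example, the following sketch shows a Service whose label selector picks up every pod carrying the label app=web (names, labels, and ports are hypothetical):

```yaml
# Sketch of a Service that selects pods by label.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # the service proxies to every pod labeled app=web
  ports:
    - port: 80          # port exposed by the service
      targetPort: 8080  # container port the traffic is forwarded to
```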
Q22. What is the difference between container and gear?
A container maps to exactly one image, a precise one-to-one relationship. A gear, in contrast, can combine multiple cartridges into a single unit. For containers, the collocation concept is fulfilled by pods.
Q23. Differentiate OpenStack and OpenShift?
The most significant difference is that OpenStack provides Infrastructure as a Service (IaaS), whereas OpenShift provides Platform as a Service (PaaS). OpenStack also differs from OpenShift in that it provides bootable virtual machines together with object and block storage.
Q24. Define source-to-image strategy.
S2I stands for source-to-image, and it is a tool for building reproducible container images. It creates ready-to-run images by injecting the application source into a base container image and assembling a new image. The new image incorporates the base (builder) image and the built source, and it is ready to run. S2I supports incremental builds, which reuse previously downloaded dependencies, previously built artifacts, and other resources from earlier builds.
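A minimal sketch of a BuildConfig that uses the source-to-image strategy might look like this (the repository, builder image, and output names are hypothetical):

```yaml
# Sketch of a BuildConfig using the source-to-image (S2I) strategy.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-s2i
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest          # builder image that assembles the source
        namespace: openshift
      incremental: true              # reuse artifacts from previous builds
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest             # resulting application image
```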
Q25. Define custom build strategy.
The custom build strategy allows developers to select a specific builder image that will be in charge of the entire build process. Using your builder image, you can customize your build procedure.
A custom builder image is a standard container image that includes the build process logic, for example for building RPMs (RPM Package Manager packages) or base images. Because custom builds run with a high level of privilege, users do not have access to them by default; only users who can be trusted with cluster administration permissions should be granted access to custom builds. In addition to the source and image secrets that can be supplied to any build type, custom strategies allow you to supply an arbitrary set of secrets to the builder pod, as sketched below.
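A sketch of a BuildConfig using the custom build strategy; the builder image, secret name, and mount path are hypothetical:

```yaml
# Sketch of a BuildConfig using the custom build strategy.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-custom
spec:
  strategy:
    customStrategy:
      from:
        kind: DockerImage
        name: quay.io/example/custom-builder:latest   # image that contains the build logic
      exposeDockerSocket: false
      secrets:
        - secretSource:
            name: my-build-secret        # hypothetical secret mounted into the builder pod
          mountPath: /var/run/secrets/build
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```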
Q26. Enlist a few build strategies that are used in OpenShift.
The key build strategies used in OpenShift are as follows:
- Custom Strategy
- Source to image Strategy
- Docker Strategy
- Pipeline Strategy
Q27. How does OpenShift use Docker and Kubernetes?
OpenShift uses Docker for packaging and running application containers, and Kubernetes as the control system for orchestrating those containers across the cluster. This control system enables deployment pipelines that can then be used for auto-scaling, testing, and other procedures.
Q28. What are Build configurations?
Build configuration resources assist with configuring and controlling builds. A build configuration describes a particular build strategy, the location of developer-supplied artifacts such as the source code, and the output image.
Q29. Name some identity providers in OAuth.
Some identity providers in OAuth are as follows:-
- HTPasswd
- LDAP
- Allow All
- Deny All
- Basic authentication
Q30. What are Init containers used for?
The OpenShift Container Platform provides init containers, which run before application containers and can contain utilities or setup scripts that aren't available in an app image. You can use an Init Container resource to do actions before the rest of a pod is deployed.
A pod can have Init Containers in addition to application containers. Init containers can be used to reorganize setup scripts and binding code.
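As a sketch, the pod below runs a placeholder init container before the application container starts (the images and command are illustrative only):

```yaml
# Sketch of a Pod whose init container performs setup work before the app container runs.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
    - name: setup
      image: registry.access.redhat.com/ubi8/ubi-minimal   # utility image used only for setup
      command: ["sh", "-c", "echo 'running setup' && sleep 5"]  # placeholder for real setup logic
  containers:
    - name: app
      image: quay.io/example/app:latest                    # hypothetical application image
```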
OpenShift Interview Questions for Experienced
Q31. What is the difference between gear and container?
The terms gear and container both describe units of application hosting. A container maps to exactly one image through a one-to-one relationship, whereas multiple cartridges can combine to form a single gear. In the context of containers, pods satisfy the collocation idea.
Q32. Why do we need DevOps tools?
The use of DevOps tools can greatly increase the flexibility of software delivery. Furthermore, DevOps tools aid in increasing deployment frequency and decreasing failure rates. DevOps tools also contribute to quicker recovery and better time management between fixes.
Q33. What is meant by application scaling in OpenShift?
In OpenShift, auto-scaling is also known as pod auto-scaling. There are two categories of scaling (a sample horizontal autoscaler manifest is sketched after the list):
1. Up (vertical scaling): With this technique, your application stays where it is and receives extra resources (CPU and memory) to accommodate a greater load.
2. Out (horizontal scaling): Additional instances (replicas) of the application are created, and the application load is distributed across them to accommodate a larger burden.
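A minimal sketch of a HorizontalPodAutoscaler that scales out a hypothetical Deployment on CPU utilization (the autoscaling/v2 API available on recent clusters is assumed):

```yaml
# Sketch of a HorizontalPodAutoscaler scaling a Deployment between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                    # hypothetical deployment to scale out/in
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add replicas when average CPU use exceeds 75%
```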
Q34. Explain the OpenShift CLI.
OpenShift applications are managed from the command line via the OpenShift CLI (the oc command). The OpenShift CLI supports the end-to-end application lifecycle, and every basic and advanced configuration, management, addition, and deployment operation can be performed with it.
Q35. What is meant by feature toggles?
Feature toggles keep two versions of a feature in the same codebase. The technique lets you separate the deployment of a change from its activation, and it can be applied across various configurations: single server groups, multiple server groups, and legacy monoliths.
Q36. Explain HAProxy on OpenShift.
If your application runs on OpenShift, HAProxy sits in front of it and receives any incoming connections. It parses the HTTP protocol to determine which application instance the connection should be routed to. This is significant because it enables persistent (sticky) sessions for the client.
Q37. What is meant by OpenShift Security?
OpenShift security is largely a combination of two components that together handle security constraints:
- SCC (Security Context Constraints): Mostly used for pod restriction, an SCC describes the constraints placed on a pod, the actions it is allowed to carry out, and the resources it has access to within the cluster (a sample SCC is sketched after the list).
- Service Account: Service accounts are primarily used to control access to the OpenShift master API, which is consulted whenever a command or request is issued from any master or node machine.
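A sketch of a restricted-style SecurityContextConstraints object; the values shown are illustrative, not a recommended policy:

```yaml
# Sketch of a restricted-style SCC that forbids privileged containers and host access.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-restricted
allowPrivilegedContainer: false    # pods admitted under this SCC may not run privileged
allowHostNetwork: false
allowHostPorts: false
runAsUser:
  type: MustRunAsRange             # UID must fall within the project's allocated range
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []                          # users and groups can be granted this SCC explicitly
groups: []
```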
Q38. What is Volume Security?
Volume security means securing the persistent volumes (PVs) and persistent volume claims (PVCs) of OpenShift cluster projects. OpenShift has four main elements for managing volume access: runAsUser, fsGroup, seLinuxOptions, and supplemental groups.
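A sketch of a pod-level securityContext that uses these four elements; the UID, group IDs, SELinux level, and claim name are hypothetical:

```yaml
# Sketch of a pod using runAsUser, fsGroup, supplementalGroups, and seLinuxOptions.
apiVersion: v1
kind: Pod
metadata:
  name: volume-security-demo
spec:
  securityContext:
    runAsUser: 1000100001          # hypothetical UID from the project's range
    fsGroup: 5555                  # group ownership applied to block-storage volumes
    supplementalGroups: [5555]     # groups used for shared storage such as NFS
    seLinuxOptions:
      level: "s0:c123,c456"        # hypothetical SELinux MCS label
  containers:
    - name: app
      image: quay.io/example/app:latest   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc        # hypothetical PVC
```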
Q39. What Are Blue/green Deployments?
The Blue/Green deployment method makes sure you have two variants of your application stacks accessible during the deployment, which reduces the amount of time it takes to complete a deployment cutover. We can quickly transition between our two active application stacks by utilizing the service and routing tiers.
Q40. What Is Deployment Pod Resources?
A deployment is completed by a pod that consumes resources (memory and CPU) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, pods consume resources up to those limits.
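For illustration, the sketch below sets explicit requests and limits on a container in a Deployment's pod template (names, image, and values are hypothetical):

```yaml
# Sketch of container resource requests and limits inside a deployment pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: quay.io/example/app:latest   # hypothetical image
          resources:
            requests:
              cpu: 100m        # guaranteed share used for scheduling decisions
              memory: 256Mi
            limits:
              cpu: 500m        # upper bound the container may consume on the node
              memory: 512Mi
```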
Q41. What Is Rolling Strategy?
A rolling deployment gradually replaces instances of an application's older version with instances of its newer version. Before degrading the old components, a rolling deployment normally waits for new pods to reach readiness via a readiness check.
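A minimal sketch of a DeploymentConfig using the Rolling strategy together with a readiness probe; the names, image, endpoint, and timings are illustrative:

```yaml
# Sketch of a DeploymentConfig that rolls out new pods gradually, gated by a readiness check.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    app: myapp
  strategy:
    type: Rolling
    rollingParams:
      maxUnavailable: 25%      # how many old pods may be taken down at once
      maxSurge: 25%            # how many extra new pods may be created above the replica count
      intervalSeconds: 1
      timeoutSeconds: 600      # give up and roll back if new pods never become ready
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: quay.io/example/app:v2   # hypothetical new version
          readinessProbe:
            httpGet:
              path: /healthz              # hypothetical readiness endpoint
              port: 8080
```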
Q42. What Are Stateful Pods?
Pods can be stopped and restarted with StatefulSets, a Kubernetes feature that keeps their network address and storage intact. StatefulSets (PetSets in OCP 3.4) are still a beta feature, but complete support ought to be included in a subsequent update.
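A minimal StatefulSet sketch with a stable network identity and per-replica storage; the headless service, image, and storage size are hypothetical:

```yaml
# Sketch of a StatefulSet: each replica keeps a stable name (db-0, db-1, ...) and its own volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless       # headless service providing stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: quay.io/example/db:latest   # hypothetical database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi         # each replica gets its own persistent volume
```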
Q43. Define routes and services in OpenShift.
A service in OpenShift is a grouping of related pods, where a pod consists of one or more containers. In OpenShift, the service is essentially treated as a REST object. Routes are used to expose services externally so that they can be reached remotely via a hostname; routes are created using administrative commands.
Q44. What Are Canary Deployments?
In OpenShift Origin, all rolling deployments are canary deployments: a new version (the canary) is tested before all of the existing instances are replaced. If the readiness check never succeeds, the canary instances are removed and the deployment is rolled back.
Q45. What do you know about OpenShift Kubernetes Engine?
With Red Hat's OpenShift Kubernetes Engine, you can use a production-ready Kubernetes infrastructure that is built for businesses. You have complete access to an enterprise-ready Kubernetes environment with OpenShift Kubernetes Engine, which is easy to set up and has a wide compatibility test matrix with many of the software components in your data center. The OpenShift Kubernetes Engine has the same SLAs, bug fixes, and defenses against typical flaws and vulnerabilities as the OpenShift Container Platform.
Q46. What do you understand by service mesh?
A service mesh is the web of microservices that make up applications in a distributed microservice architecture, as well as the connections between those microservices. A Service Mesh may become challenging to understand and maintain as it becomes larger and more complicated. Based on the open-source Istio project, Red Hat OpenShift Service Mesh enhances current distributed systems by adding a transparent layer without altering the service code.
Q47. What are platform operators?
The Operators of the OpenShift Container Platform are among its most important components. The default platform Operators, commonly referred to as cluster Operators, cover all cluster-level functions. Each platform Operator is represented by a ClusterOperator object, which cluster administrators can inspect on the Cluster Settings page of the OpenShift Container Platform web console.
Q48. What are the security-related features in the OpenShift Container Platform that are based on Kubernetes?
The Kubernetes-based OpenShift Container Platform has the following security-related features:
- Role-based access controls and network restrictions are combined to create multitenancy, a method for isolating containers at several different levels.
- Admission plug-ins create a barrier between the API and the users who submit requests to it.
Q49. What systems are running on AWS in the OpenShift environment?
The OpenShift environment on Amazon Web Services is made up of one master node and one infrastructure node. Moreover, 24 application nodes and an NFS server are included.
Q50. What is HTTP strict transport security?
A security measure known as HTTP Strict Transport Security (HSTS) notifies the browser client that only HTTPS traffic is allowed on the route host. By alerting users to the necessity for HTTPS transport without the need for HTTP redirection, HSTS also helps to increase online traffic. You can interact with websites more rapidly, thanks to HSTS.
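On OpenShift, HSTS is enabled per route with the haproxy.router.openshift.io/hsts_header annotation. A sketch of an edge-terminated route using it (the host and service names are hypothetical, and the max-age value is illustrative):

```yaml
# Sketch of a secure Route that sends an HSTS header to clients.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: secure-frontend
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: edge            # HSTS only takes effect on TLS-terminated routes
```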
OpenShift MCQ Questions
1. Which of the following is a core component of OpenShift?
a) Docker
b) Kubernetes
c) Vagrant
d) VirtualBox
Answer: b) Kubernetes
2. What is OpenShift primarily used for?
a) Web development
b) Continuous Integration
c) Platform as a Service (PaaS)
d) Data storage
Answer: c) Platform as a Service (PaaS)
3. Which language is used to define the configuration in OpenShift templates?
a) XML
b) YAML
c) JSON
d) HTML
Answer: b) YAML
4. OpenShift Origin is now known as what?
a) OpenShift Enterprise
b) OKD
c) OCP
d) MiniShift
Answer: b) OKD
5. Which command is used to create a new application in OpenShift?
a) oc new-app
b) oc start-app
c) oc create-app
d) oc deploy-app
Answer: a) oc new-app
6. OpenShift supports which type of scaling?
a) Vertical scaling only
b) Horizontal scaling only
c) Both vertical and horizontal scaling
d) Neither vertical nor horizontal scaling
Answer: c) Both vertical and horizontal scaling
7. What is the default storage mechanism used in OpenShift?
a) GlusterFS
b) NFS
c) Persistent Volumes
d) Local Storage
Answer: c) Persistent Volumes
8. Which OpenShift component ensures that the deployed containers are always running?
a) Kubernetes master
b) Docker Engine
c) Scheduler
d) Controller Manager
Answer: d) Controller Manager
9. Which of the following is a networking feature provided by OpenShift?
a) Network segmentation
b) Load balancing
c) VPN integration
d) VPC creation
Answer: b) Load balancing
10. What does the command oc get pods do?
a) Deploys a new pod
b) Deletes a pod
c) Lists all the pods
d) Scales a pod
Answer: c) Lists all the pods
Conclusion
This blog contained a series of frequently asked OpenShift interview questions. We started with a brief introduction to OpenShift and went through fifty OpenShift interview questions along with a set of MCQs.
After reading about these OpenShift interview questions, are you not feeling excited to read and explore more articles on other interview question topics? Don't worry; Coding Ninjas has you covered.