
Introduction
The kernel is a computer program that lies at the core of a computer's operating system and has complete control over everything in the system.
The kernel allows hardware and software components to communicate with one another. It uses device drivers to manage hardware resources (such as I/O devices, memory, and cryptographic hardware), arbitrates resource conflicts between processes, and optimizes the use of shared resources such as CPU time, cache, file systems, and network sockets.
On most computers, the kernel is one of the first programs to load at power-on. It handles the rest of the startup process as well as requests from software for memory, peripherals, and input/output, translating them into data-processing instructions for the CPU.
Kernel I/O Subsystem
We communicate with a computer system through input and output (I/O) devices. The transfer of data between main memory and the various I/O peripherals is referred to as I/O. We enter data through input devices such as keyboards, mice, card readers, scanners, voice recognition systems, and touch screens, and we obtain results from the computer through output devices such as monitors, printers, plotters, and speakers.
The processor is not connected directly to these devices; instead, data exchange between them is managed through an interface. The interface converts system bus signals to and from a format suitable for the given device, and I/O registers are used for communication between these external devices and the processor.
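As a rough illustration of communication through I/O registers, here is a minimal C sketch of the kind of code that lives inside a device driver rather than an ordinary user program; the register addresses, bit layout, and names are hypothetical, invented for this example.

```c
#include <stdint.h>

/* Hypothetical memory-mapped register addresses for an illustrative device;
 * real addresses and bit layouts come from the platform's documentation. */
#define DEV_STATUS_REG ((volatile uint32_t *)0x10000000u)
#define DEV_DATA_REG   ((volatile uint32_t *)0x10000004u)
#define STATUS_READY   0x1u

/* Busy-wait until the device reports ready, then write one byte to it.
 * 'volatile' keeps the compiler from optimizing away the register accesses. */
static void dev_write_byte(uint8_t byte)
{
    while ((*DEV_STATUS_REG & STATUS_READY) == 0)
        ;                       /* poll the status register */
    *DEV_DATA_REG = byte;       /* store the byte into the data register */
}
```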
The kernel provides many I/O services. Building on the hardware and device-driver infrastructure, it offers functions such as scheduling, buffering, caching, spooling, device reservation, and error handling.
1. Scheduling
- The term "schedule" refers to determining an excellent sequence to perform a series of I/O requests.
- Scheduling can improve overall system performance, share device access fairly among processes, and reduce the average waiting, response, and turnaround times for I/O to complete.
- When an application issues a blocking I/O system call, the request is placed in the queue that the operating system maintains for that device. A sketch of one simple ordering policy follows this list.
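As a rough illustration of request ordering, the C sketch below sorts a set of pending disk-block requests into a single ascending pass, loosely in the spirit of SCAN-style disk scheduling; the structure and block numbers are invented for this example and do not reflect any real kernel API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative pending request: just the disk block it targets. */
struct io_request {
    int block;
};

static int by_block(const void *a, const void *b)
{
    return ((const struct io_request *)a)->block -
           ((const struct io_request *)b)->block;
}

int main(void)
{
    /* Requests in arrival order; servicing them as-is would seek back
     * and forth across the disk. */
    struct io_request queue[] = {{98}, {183}, {37}, {122}, {14}, {124}};
    size_t n = sizeof queue / sizeof queue[0];

    /* One ascending pass: service requests in block order to cut seek time. */
    qsort(queue, n, sizeof queue[0], by_block);

    for (size_t i = 0; i < n; i++)
        printf("service block %d\n", queue[i].block);
    return 0;
}
```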
2. Buffering
- A buffer is a region of main memory used to hold data temporarily while it is transferred between two devices or between a device and an application.
- Assists in dealing with device speed discrepancies.
- Assists in dealing with device transfer size mismatches.
- To preserve "copy semantics," data is first copied from the application's memory into a kernel buffer.
- The device is then fed from the kernel copy rather than directly from application memory.
- This guarantees that the data written to the device is the version that existed at the time of the system call, even if the application modifies its buffer afterwards; see the sketch after this list.
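The C sketch below is a toy, user-space model of copy semantics; the kernel_write helper, the fixed-size buffer, and all names are invented for this example. It is meant only to show the idea of snapshotting the caller's data before the write proceeds, not a real kernel interface.

```c
#include <string.h>

/* Toy model of "copy semantics": the (pretend) kernel copies the caller's
 * data into its own buffer before queueing the write, so the application
 * may reuse its buffer immediately. */
struct kbuf {
    size_t len;
    unsigned char data[4096];
};

static int kernel_write(struct kbuf *kb, const void *user_buf, size_t len)
{
    if (len > sizeof kb->data)
        return -1;                   /* a real kernel would split the request */
    memcpy(kb->data, user_buf, len); /* snapshot the user data */
    kb->len = len;
    /* ...a device driver would now stream kb->data to the device... */
    return 0;
}

int main(void)
{
    struct kbuf kb;
    char msg[] = "first message";

    kernel_write(&kb, msg, sizeof msg);
    /* Overwriting msg here cannot corrupt the queued write,
     * because the kernel buffer already holds its own copy. */
    strcpy(msg, "second one");
    return 0;
}
```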
3. Caching
- Caching stores a copy of data in a location that is faster to access than the original.
- For example, when you request a file from a web page, it is stored on your hard disk in a cache subdirectory under your browser's directory. When you return to a page you have recently visited, the browser can retrieve files from the cache rather than the original server, saving time and reducing network traffic.
- The distinction between a cache and a buffer is that a buffer may hold the only existing copy of a data item, whereas a cache, by definition, holds a copy of an item that also resides elsewhere. A small sketch of a cache lookup follows this list.
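As a rough illustration, the sketch below models a one-entry block cache in C; the block size, function names, and the stand-in "device" read are invented for this example.

```c
#include <stdio.h>

/* Tiny illustrative cache: remember the last block read so a repeated
 * request can skip the slow device access. */
#define BLOCK_SIZE 512

static int  cached_block = -1;
static char cached_data[BLOCK_SIZE];

static void slow_device_read(int block, char *out)
{
    /* Stand-in for a real disk read. */
    snprintf(out, BLOCK_SIZE, "contents of block %d", block);
}

static const char *read_block(int block)
{
    if (block != cached_block) {          /* cache miss: go to the device */
        slow_device_read(block, cached_data);
        cached_block = block;
    }
    return cached_data;                   /* cache hit: reuse the copy */
}

int main(void)
{
    puts(read_block(7));   /* miss: reads from the "device" */
    puts(read_block(7));   /* hit: served from the cached copy */
    return 0;
}
```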
4. Spooling
- A spool is a buffer that holds jobs for a device until the device is ready to accept them. Spooling treats the disk as a very large buffer that can queue as many jobs as needed until the output device can take them.
- It retains output for a device, such as a printer, that can serve only one request at a time and cannot handle interleaved data streams.
- Spooling also allows a user to view the queued data streams and, if desired, delete them.
- Printing is the classic example: print jobs are spooled to disk and fed to the printer one at a time. A minimal sketch follows this list.
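The sketch below models a print spool as a simple in-memory FIFO in C; the queue size, function names, and job names are invented, and a real spooler would store jobs on disk and run as a separate daemon.

```c
#include <stdio.h>

/* Toy print spool: jobs accumulate in a FIFO queue and are handed to the
 * printer strictly one at a time, because the printer cannot interleave
 * output from different jobs. */
#define MAX_JOBS 8

static const char *spool[MAX_JOBS];
static int njobs;

static void spool_submit(const char *job)
{
    if (njobs < MAX_JOBS)
        spool[njobs++] = job;              /* job waits in the spool */
}

static void printer_drain(void)
{
    for (int i = 0; i < njobs; i++)        /* one job at a time, in order */
        printf("printing: %s\n", spool[i]);
    njobs = 0;
}

int main(void)
{
    spool_submit("report.pdf");            /* users keep working... */
    spool_submit("slides.pdf");
    printer_drain();                       /* ...while the printer catches up */
    return 0;
}
```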
5. Error Handling
- An operating system that uses protected memory can guard against many kinds of hardware and application errors, so that a minor device fault does not bring down the whole system.
- Devices and I/O transfers can fail for many reasons: transient causes, such as a network becoming overloaded, and permanent causes, such as a disk controller becoming defective. The sketch below shows this distinction from the application's side.
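The following C sketch illustrates the same idea in user space, assuming a POSIX environment: a transient failure (an interrupted call) is retried, while any other error is reported to the caller as permanent.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Retry a read when the failure is transient (interrupted system call),
 * but give up and report the error when it is not. */
static ssize_t read_retry(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;                      /* success (or end of file) */
        if (errno == EINTR)
            continue;                      /* transient: try again */
        return -1;                         /* permanent: caller handles it */
    }
}

int main(void)
{
    char buf[64];
    if (read_retry(0, buf, sizeof buf) < 0)
        fprintf(stderr, "read failed: %s\n", strerror(errno));
    return 0;
}
```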
6. I/O Protection
- User programs may, accidentally or deliberately, attempt to disrupt normal operation by issuing illegal I/O instructions.
- To prevent this, all I/O instructions are defined as privileged: user programs must request I/O through system calls, and both memory-mapped I/O and I/O port memory locations must be protected. A minimal illustration follows.
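As a minimal illustration, assuming a POSIX environment, the sketch below performs I/O the sanctioned way, through the write() system call; the closing comment notes what would happen if an unprivileged program tried a direct port instruction instead.

```c
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* The sanctioned path: ask the kernel to do the I/O via a system call.
     * The kernel validates the request before touching the hardware. */
    const char msg[] = "hello via the kernel\n";
    write(STDOUT_FILENO, msg, strlen(msg));

    /* By contrast, a direct privileged I/O instruction issued from user
     * mode (for example, an x86 'out' to a port without permission) traps
     * into the kernel, and the process receives a fault instead of
     * reaching the device. Not shown here, as it is architecture-specific. */
    return 0;
}
```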