Unit 3
1) Define following terms related to interrupts.
a. ISR
b. IVT
c. Interrupt Priority
d. Interrupt Handler
Here are the definitions for the terms related to interrupts:
a. ISR (Interrupt Service Routine)
An Interrupt Service Routine (ISR) is a special block of code that is executed in response to an interrupt signal. When an interrupt occurs, the processor stops executing the current program and jumps to the ISR associated with that interrupt. The ISR handles the specific task related to the interrupt, such as processing input from a device or handling an error condition. After the ISR completes its task, control is returned to the interrupted program.
b. IVT (Interrupt Vector Table)
The Interrupt Vector Table (IVT) is a data structure used by the processor to manage interrupts. It is a table of pointers (addresses) that point to the ISRs for each interrupt type. Each entry in the IVT corresponds to a specific interrupt number, allowing the processor to quickly locate and execute the appropriate ISR when an interrupt occurs. In the 8086 architecture, the IVT is located at the beginning of memory, starting at address `0000:0000`, and occupies the first 1 KB (256 entries × 4 bytes).
c. Interrupt Priority
Interrupt Priority refers to the order in which multiple interrupts are handled by the processor when they occur simultaneously or in quick succession. Each interrupt can be assigned a priority level, which determines how it is processed relative to other interrupts. Higher priority interrupts can preempt lower priority ones, ensuring that critical tasks are addressed promptly. This prioritization helps manage system responsiveness and ensures that important events are not delayed by less critical ones.
d. Interrupt Handler
An Interrupt Handler is another term for an Interrupt Service Routine (ISR). It refers to the specific code that runs in response to an interrupt signal. The term "interrupt handler" emphasizes its role in managing and processing interrupts, ensuring that appropriate actions are taken based on the type of interrupt received.
Summary
- ISR: Code executed in response to an interrupt.
- IVT: Table of pointers to ISRs for each interrupt type.
- Interrupt Priority: Order of handling multiple interrupts based on their importance.
- Interrupt Handler: Code that manages and processes interrupts, synonymous with ISR.
These concepts are fundamental in understanding how microprocessors handle asynchronous events and manage system resources effectively through interrupts.
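The dispatch idea behind the ISR/IVT pair can be sketched in Python (a toy model, not 8086-accurate; the specific vector assignments here are assumptions for illustration, except vector 0, which the 8086 defines as divide error):

```python
# Minimal sketch (illustrative, not 8086-accurate): an interrupt
# vector table modeled as a list of handler functions, indexed by
# interrupt type number.

def keyboard_isr():
    return "handled keyboard input"

def divide_error_isr():
    return "handled divide error"

# 256-entry "IVT"; unused vectors fall back to a default handler.
ivt = [lambda: "unhandled interrupt"] * 256
ivt[9] = keyboard_isr        # vector 9: assumed keyboard interrupt
ivt[0] = divide_error_isr    # vector 0: divide error (8086-defined)

def raise_interrupt(vector):
    # The processor uses the vector number to index the table
    # and transfer control to the matching ISR.
    return ivt[vector]()

print(raise_interrupt(0))   # -> handled divide error
print(raise_interrupt(9))   # -> handled keyboard input
```

The key point mirrored here is that the vector number is nothing more than a table index; the "ISR" is whatever code the table entry points at.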
2) Give difference between Interrupt and polling.
Difference Between Interrupt and Polling
Interrupts and polling are two methods used by a processor to manage events or signals from hardware devices. Here’s a comparison of the two:
| Feature | Interrupt | Polling |
|---|---|---|
| Definition | A signal that temporarily halts the CPU's current operations so it can respond to an event. | The CPU repeatedly checks the status of an I/O device at regular intervals to see if it needs attention. |
| Efficiency | More efficient for infrequent events, because the CPU can perform other tasks until an interrupt occurs. | Less efficient, especially for infrequent events, as CPU cycles are wasted checking devices that may not need attention. |
| CPU usage | The CPU can be idle or perform other tasks while waiting for an interrupt signal. | The CPU actively checks devices, which can waste processing time and resources. |
| Response time | Typically faster, since the CPU can react as soon as the interrupt occurs. | Can be slower, especially if the polling interval is long, leading to delays in handling events. |
| Complexity | More complex to implement, due to the interrupt vector table and handling routines (ISRs). | Simpler to implement: straightforward looping and condition checks. |
| Use cases | Real-time systems, device drivers, and scenarios needing immediate attention (e.g., keyboard input). | Simpler systems, or hardware that does not support interrupts (e.g., basic embedded systems). |
Summary
- Interrupts are signals that allow devices to notify the CPU when they need processing, enabling efficient multitasking.
- Polling involves the CPU continuously checking devices for status updates, which can lead to inefficiencies.
Both methods have their use cases depending on system requirements, complexity, and performance considerations.
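The contrast above can be sketched in a few lines of Python (the event queue and handler registry are stand-ins for real hardware status flags and vectors, purely for illustration):

```python
# Illustrative sketch contrasting the two styles. The deque of events
# stands in for a hardware device's pending data.
from collections import deque

events = deque(["key A", "key B"])       # pretend device events

# Polling: the CPU loops, checking the device each iteration.
def poll_once():
    if events:                           # "is the device ready?"
        return events.popleft()          # service it
    return None                          # wasted check otherwise

# Interrupt: the device pushes an event; a registered handler runs
# only when something actually happens.
handlers = {}

def register_handler(irq, fn):
    handlers[irq] = fn

def device_fires(irq, data):
    return handlers[irq](data)           # CPU is free until this moment

register_handler(1, lambda d: f"serviced {d}")
print(poll_once())                       # -> key A
print(device_fires(1, "key C"))          # -> serviced key C
```

The design trade-off is visible even in the toy model: `poll_once` must run regardless of whether data exists, while `device_fires` costs nothing until an event actually occurs.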
3) Classify Interrupt based on their types.
Classification of Interrupts Based on Their Types
Interrupts can be classified into several categories based on their source, nature, and handling. Here are the primary classifications:
1. Based on Source
- Hardware Interrupts: Generated by hardware devices (e.g., keyboard, mouse, disk drives) to signal the CPU that they require attention. Hardware interrupts can be further classified into:
- Maskable Interrupts: These can be ignored or "masked" by the CPU if it is busy or if a higher-priority interrupt occurs.
- Non-Maskable Interrupts (NMI): These cannot be ignored and are used for critical events (e.g., power failure).
- Software Interrupts: Generated by programs when they request services from the operating system or when an error occurs. These can include:
- System Calls: Requests made by a program to the operating system for services.
- Exceptions: Generated by the CPU itself in response to errors (e.g., division by zero).
2. Based on Timing
- Synchronous Interrupts: Occur as a direct result of executing an instruction in a program. For example, an exception due to an invalid operation (like dividing by zero).
- Asynchronous Interrupts: Occur independently of the current instruction being executed, usually triggered by external hardware events.
3. Based on Priority
- Prioritized Interrupts: Some systems assign priority levels to interrupts, allowing higher-priority interrupts to preempt lower-priority ones. This classification is crucial in real-time systems where certain tasks must be addressed immediately.
4. Based on Functionality
- I/O Interrupts: Triggered by I/O devices to indicate that they are ready for data transfer (e.g., a keyboard interrupt when a key is pressed).
- Timer Interrupts: Generated by a timer within the CPU to allow the operating system to perform regular tasks like scheduling.
- Debugging Interrupts: Used during debugging sessions to halt execution and allow inspection of program state.
Summary
Interrupts are classified based on their source (hardware vs. software), timing (synchronous vs. asynchronous), priority levels, and functionality (I/O, timer, debugging). Understanding these classifications helps in designing efficient systems that can respond promptly to events while managing resources effectively.
4) Explain the procedure followed by an 8086 microprocessor when any interrupt arrives.
When an interrupt arrives at the 8086 microprocessor, the processor follows a specific sequence of steps to handle the interrupt. Here’s a detailed explanation of the procedure:
Procedure Followed by the 8086 Microprocessor When an Interrupt Arrives
- Interrupt Signal Detection:
- The 8086 continuously checks for interrupt signals during its instruction execution cycle. If an interrupt signal is detected, the processor will respond to it according to its priority level.
- Completion of Current Instruction:
- The processor completes the execution of the current instruction before responding to the interrupt. This ensures that the system remains in a consistent state.
- Disable Further Interrupts (Optional):
- Depending on the configuration and type of interrupt, the CPU may disable further interrupts by clearing the Interrupt Flag (IF) in the FLAGS register. This prevents other interrupts from being serviced while the current interrupt is being handled.
- Save Context:
- The processor saves the current context (state) of the running program. This includes:
- The contents of the registers (general-purpose registers, segment registers, and FLAGS register).
- The Instruction Pointer (IP), which points to the next instruction to be executed.
- This context is usually saved on the stack.
- Determine Interrupt Vector:
- The processor uses the interrupt vector number (usually provided by the hardware or specified in a specific register) to locate the appropriate Interrupt Service Routine (ISR).
- It does this by accessing the Interrupt Vector Table (IVT), which contains pointers to ISRs for each possible interrupt.
- Load Segment and Offset:
- The processor retrieves the segment and offset address of the ISR from the IVT. This information tells it where to find the ISR in memory.
- Jump to ISR:
- The CPU jumps to the address of the ISR specified in the IVT and begins executing it. At this point, control is transferred from the main program to the ISR.
- Execute ISR:
- The ISR executes its designated tasks, which may include handling device requests, processing data, or performing error handling.
- Restore Context:
- Once the ISR has completed its tasks, it must restore the saved context from the stack. This includes restoring all registers and flags that were saved before handling the interrupt.
- Return from Interrupt:
- The CPU executes a special instruction (`IRET`, return from interrupt) that restores control to the main program at the point where it was interrupted.
- If interrupts were disabled at step 3, they are re-enabled at this point.
- Resume Execution:
- The processor resumes execution of the main program as if no interruption had occurred.
Summary
The 8086 microprocessor handles interrupts through a well-defined sequence: detecting an interrupt, completing current instructions, saving context, determining and executing an ISR, restoring context, and resuming normal operation. This mechanism allows for efficient management of asynchronous events while maintaining system stability and performance.
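The save/dispatch/restore sequence above can be sketched as a small simulation (register names and the IF bit position match the 8086; the memory model, vector number, and ISR address are simplified assumptions):

```python
# Hedged sketch of the 8086 sequence: push FLAGS, CS, IP; clear IF;
# load CS:IP from the IVT; IRET pops everything back in reverse order.

stack = []
cpu = {"CS": 0x2000, "IP": 0x0100, "FLAGS": 0x0202}  # IF bit (0x0200) set
ivt = {8: (0xF000, 0x1234)}   # vector 8 -> ISR at F000:1234 (assumed)

def interrupt(vector):
    # Save context on the stack after the current instruction finishes.
    stack.append(cpu["FLAGS"])
    stack.append(cpu["CS"])
    stack.append(cpu["IP"])
    cpu["FLAGS"] &= ~0x0200           # clear IF: block further interrupts
    # Fetch the ISR's segment:offset from the IVT and jump to it.
    cpu["CS"], cpu["IP"] = ivt[vector]

def iret():
    # Restore IP, CS, FLAGS in reverse order of pushing.
    cpu["IP"] = stack.pop()
    cpu["CS"] = stack.pop()
    cpu["FLAGS"] = stack.pop()        # restored IF re-enables interrupts

interrupt(8)
assert (cpu["CS"], cpu["IP"]) == (0xF000, 0x1234)   # now inside the ISR
iret()
assert (cpu["CS"], cpu["IP"]) == (0x2000, 0x0100)   # back where we left off
```

Note how re-enabling interrupts falls out automatically: `IRET` restores the saved FLAGS, which still had IF set.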
5) Explain how the address of ISR is calculated in an 8086 microprocessor.
In the 8086 microprocessor, the address of the Interrupt Service Routine (ISR) is calculated using the Interrupt Vector Table (IVT). Here’s a detailed explanation of how this process works:
Steps to Calculate the Address of ISR in the 8086 Microprocessor
- Interrupt Vector Table (IVT):
- The IVT is a table located at the beginning of memory, specifically at address `0000:0000` in the 8086 architecture.
- The IVT contains 256 entries (for 256 possible interrupt types), with each entry being 4 bytes long. Each entry consists of a 2-byte offset followed by a 2-byte segment address for the corresponding ISR.
- Interrupt Type:
- When an interrupt occurs, it is assigned a specific interrupt type number (ranging from `0` to `255`).
- This number identifies which ISR to execute.
- Calculating the Address:
- To find the IVT entry for an interrupt type, the processor uses the following formula:
- Address of IVT entry = IVT base address + (interrupt type number × size of each entry)
- In the 8086, since each entry in the IVT is 4 bytes (2 bytes for the offset and 2 bytes for the segment), the formula becomes:
- Address of IVT entry = 0x0000 + (interrupt type number × 4)
- Accessing the IVT:
- The processor retrieves the segment and offset from the IVT using this calculated address.
- For example, if the interrupt type number is `5`, then:
- Address = 0x0000 + (5 × 4) = 0x0000 + 20 = 0x0014
- The processor then reads the offset and segment from this calculated address:
- The first two bytes at `0x0014` provide the offset address (loaded into IP).
- The next two bytes at `0x0016` provide the segment address (loaded into CS).
- Jumping to ISR:
- Once it has both segment and offset addresses, the CPU can jump to that location in memory to execute the ISR.
- The instruction pointer (IP) is set to this new address, allowing execution to continue from there.
Summary
The address of an ISR in an 8086 microprocessor is calculated by accessing the Interrupt Vector Table (IVT), using a specific interrupt type number to find its corresponding entry. Each entry contains a segment and offset that points to the ISR. This mechanism allows for efficient handling of interrupts by directing control flow to appropriate service routines based on specific events or conditions.
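The calculation described above can be written out directly (the memory contents for vector 5 are made-up values for illustration; the entry layout and the 8086 physical-address formula, segment × 16 + offset, are as documented):

```python
# The IVT-entry calculation in code: entry address = vector * 4, then
# read the offset (low word) and segment (high word) little-endian,
# then form the 20-bit physical address as segment*16 + offset.

def ivt_entry_address(vector):
    return vector * 4                  # IVT base is 0000:0000

def isr_physical_address(memory, vector):
    base = ivt_entry_address(vector)
    offset  = memory[base]     | (memory[base + 1] << 8)  # bytes 0-1
    segment = memory[base + 2] | (memory[base + 3] << 8)  # bytes 2-3
    return (segment << 4) + offset     # 8086 physical address

# Fake memory: vector 5's entry at 0x0014 holds offset 0x1000 and
# segment 0x2000 (assumed values, stored little-endian).
memory = {0x14: 0x00, 0x15: 0x10, 0x16: 0x00, 0x17: 0x20}

assert ivt_entry_address(5) == 0x0014
assert isr_physical_address(memory, 5) == 0x21000   # 2000:1000
```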
6) Explain the different types of decoding techniques in memory interfacing.
In memory interfacing, decoding techniques are essential for determining how the microprocessor interacts with memory and I/O devices. Here are the different types of decoding techniques commonly used:
1. Direct Decoding
- Description: In direct decoding, the address lines from the microprocessor are directly connected to the address lines of the memory or I/O devices.
- Characteristics:
- Simple and straightforward implementation.
- Each device or memory chip is assigned a unique address range.
- Suitable for systems with a limited number of devices.
- Example: If a microprocessor has an address bus of 16 bits, it can directly address up to 65,536 memory locations or I/O ports.
2. Partial Decoding
- Description: In partial decoding, only a subset of the address lines is used to select a device or memory location.
- Characteristics:
- Reduces the number of required address lines for larger systems.
- Allows multiple devices to share an address range, which can lead to more efficient use of address space.
- It may lead to overlapping addresses if not carefully managed.
- Example: Using only the higher-order address lines (e.g., A8-A15) to select among several devices while using lower-order lines for specific addressing within those devices.
3. Full Decoding
- Description: Full decoding uses all available address lines to uniquely identify each device or memory location.
- Characteristics:
- Ensures that each device has a unique address with no overlap.
- More complex than direct or partial decoding due to additional logic circuits required.
- Suitable for systems with many devices needing distinct addresses.
- Example: In a system with multiple memory chips, full decoding ensures that each chip responds only when its specific address range is accessed.
4. Memory-Mapped I/O
- Description: In this technique, both memory and I/O devices share the same address space.
- Characteristics:
- Simplifies programming since I/O operations can be performed using standard memory instructions.
- Requires careful management of address space to avoid conflicts between memory and I/O devices.
- Example: A system where both RAM and peripheral devices like printers and displays are addressed in the same range (e.g., addresses from `0000H` to `FFFFH`).
5. I/O Mapped I/O (Port-Mapped I/O)
- Description: This technique uses separate address spaces for memory and I/O devices.
- Characteristics:
- Allows for more straightforward implementation since memory and I/O operations are distinctly handled.
- The processor uses specific instructions (like `IN` and `OUT`) to access I/O ports.
- Example: A system where RAM occupies the memory space from `0000H` to `FFFFH`, while I/O devices occupy a separate I/O address space (e.g., ports `00H` to `FFH`).
Summary
Decoding techniques in memory interfacing include direct decoding, partial decoding, full decoding, memory-mapped I/O, and I/O mapped I/O. Each technique has its advantages and disadvantages depending on system requirements, complexity, and the number of devices being interfaced. Understanding these techniques is crucial for designing efficient microprocessor-based systems.
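The practical difference between full and partial decoding is address aliasing, which a short sketch makes concrete (the address ranges chosen here are arbitrary assumptions):

```python
# Sketch of why partial decoding causes address aliasing. A full
# decoder compares every relevant high address line; a partial
# decoder checks only one, so the chip responds at multiple ranges.

def full_decode_select(addr):
    # Chip enabled only for the exact 0x8000-0x8FFF range
    # (all of A12-A15 are compared).
    return (addr >> 12) == 0x8

def partial_decode_select(addr):
    # Only A15 is checked: the chip answers anywhere in 0x8000-0xFFFF.
    return (addr >> 15) & 1 == 1

assert full_decode_select(0x8123) and not full_decode_select(0x9123)
assert partial_decode_select(0x8123) and partial_decode_select(0x9123)  # alias!
```

The alias on the last line is exactly the "overlapping addresses" hazard mentioned above: the partially decoded chip also responds at `0x9123`, so no other device may be placed there.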
7) Explain following decoding techniques.
a. Absolute decoding
b. Partial decoding
c. Block decoding
Here’s an explanation of the three decoding techniques in memory interfacing: Absolute Decoding, Partial Decoding, and Block Decoding.
a. Absolute Decoding
- Definition: Absolute decoding involves connecting the address lines of the microprocessor directly to the address lines of the memory or I/O devices. Each device is assigned a unique address, and the microprocessor can access each device directly through its specific address.
- Characteristics:
- Simplicity: The implementation is straightforward since each device has a dedicated address.
- Address Space: The entire address space is utilized, which means that if a device is not present at its assigned address, it will simply not respond.
- Limitations: This method can become impractical when the number of devices increases, as it requires more address lines and can lead to wasted address space if devices are not fully populated.
- Example: If a system has 16 address lines, it can directly address up to 65,536 (2^16) different memory locations or I/O ports.
b. Partial Decoding
- Definition: In partial decoding, only a subset of the address lines is used to select a device or memory location. This technique allows multiple devices to share an address range.
- Characteristics:
- Efficiency: Reduces the number of required address lines and allows for more efficient use of the available address space.
- Address Overlap: Since multiple devices may respond to the same address range, careful management is required to avoid conflicts.
- Complexity: Implementation can be more complex than absolute decoding due to the need for additional logic to manage overlapping addresses.
- Example: If a system uses only the higher-order address lines (e.g., A8-A15) for device selection, multiple devices could be assigned addresses within the same range but would need additional logic to determine which device should respond.
c. Block Decoding
- Definition: Block decoding divides the entire address space into blocks and assigns each block to a specific device or group of devices. This technique uses a combination of both absolute and partial decoding.
- Characteristics:
- Flexibility: Allows for larger systems with many devices by grouping them into blocks, which can simplify addressing.
- Reduced Complexity: Compared to absolute decoding, it reduces complexity by limiting the number of unique addresses that need to be managed directly.
- Potential for Waste: Similar to partial decoding, there may be unused addresses within blocks if not all devices are present.
- Example: A system might allocate one block of addresses (e.g., `0000H` to `00FFH`) for one type of device and another block (`0100H` to `01FFH`) for another type, allowing for efficient organization without needing unique decoding for every single device.
Summary
- Absolute Decoding uses all address lines directly connected to devices with unique addresses but can lead to inefficiencies in larger systems.
- Partial Decoding uses a subset of address lines, allowing multiple devices to share an address range but requiring careful management.
- Block Decoding organizes devices into blocks within the address space, combining elements of both absolute and partial decoding for flexibility and efficiency.
These decoding techniques are essential in designing effective memory interfacing systems that can efficiently manage data communication between the microprocessor and peripheral devices.
8) Give the difference between memory mapped I/O and peripheral mapped I/O technique.
Here’s a detailed comparison between Memory-Mapped I/O and Peripheral-Mapped I/O (also known as I/O Mapped I/O or Port-Mapped I/O):
Memory-Mapped I/O
- Definition:
- In memory-mapped I/O, both memory and I/O devices share the same address space. The addresses used for accessing I/O devices are part of the same address space as the main memory.
- Addressing:
- Each device is assigned a unique address in the memory address space. The CPU can use standard memory instructions to access these devices.
- Instructions:
- The same instructions used for reading from and writing to memory can also be used for I/O operations. For example, `MOV` instructions can transfer data between registers and I/O devices.
- Simplicity:
- This approach simplifies programming since the same set of instructions is used for both memory and I/O operations.
- Performance:
- It can lead to faster data transfer rates, as there are no special instructions needed to handle I/O, allowing for more efficient use of the CPU.
- Examples:
- Common in modern architectures where devices like graphics cards, network cards, and storage controllers are accessed through memory addresses.
- Limitations:
- The total addressable space is shared between RAM and I/O devices, which may limit the amount of memory available if many devices are mapped into the address space.
Peripheral-Mapped I/O (I/O Mapped I/O)
- Definition:
- In peripheral-mapped I/O, a separate address space is designated specifically for I/O devices, distinct from the main memory address space.
- Addressing:
- Each device has its own unique address within this separate I/O address space, allowing for a different set of addresses dedicated solely to I/O operations.
- Instructions:
- Special instructions are used to perform input and output operations (e.g., the `IN` and `OUT` instructions in x86 architecture). These instructions cannot be used for regular memory access.
- Complexity:
- This method can complicate programming since different sets of instructions are required for memory and I/O operations.
- Performance:
- It may result in slower data transfer rates compared to memory-mapped I/O due to the need for special instructions and potential additional overhead in managing two separate address spaces.
- Examples:
- Often used in older microprocessor architectures (like the Intel 8085) where peripheral devices such as printers or disk drives have their own dedicated address range.
- Limitations:
- The total number of available addresses is reduced since a separate address space must be allocated for peripherals, which can limit the number of devices that can be connected.
Summary of Differences
| Feature | Memory-Mapped I/O | Peripheral-Mapped I/O |
|---|---|---|
| Address space | Shared with main memory | Separate from main memory |
| Instructions used | Standard memory instructions | Special I/O instructions (`IN`/`OUT`) |
| Programming simplicity | Simpler (same instructions) | More complex (different instructions) |
| Performance | Potentially faster | May be slower due to overhead |
| Device addressing | Uses regular memory addressing | Uses dedicated I/O addressing |
| Examples | Modern architectures | Older microprocessors (e.g., 8085) |
Conclusion
Both Memory-Mapped I/O and Peripheral-Mapped I/O have their advantages and disadvantages, and the choice between them depends on system design requirements, complexity, performance considerations, and the number of devices to be interfaced with the CPU.
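The two schemes can be sketched side by side (the addresses, port numbers, and device names below are assumptions, chosen only to show the structural difference):

```python
# Sketch of the two addressing schemes. In memory-mapped I/O one
# address space mixes RAM and device registers, so ordinary memory
# instructions reach devices; in port-mapped I/O a separate IN/OUT
# path addresses a distinct, usually smaller, port space.

ram = {}                       # shared memory space
mmio = {0xFF00: "UART"}        # device register mapped into memory space
ports = {0x60: "keyboard"}     # separate I/O port space

def mov_read(addr):            # one instruction path serves both
    return mmio.get(addr, ram.get(addr))

def in_read(port):             # dedicated IN-instruction path
    return ports.get(port)

ram[0x1000] = 42
assert mov_read(0x1000) == 42       # ordinary memory
assert mov_read(0xFF00) == "UART"   # device, via the same instruction
assert in_read(0x60) == "keyboard"  # device, via the separate port space
```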
9) Give examples of I/O and memory interfacing (from the PPT, plus a few more).
Here are some examples of I/O and memory interfacing techniques that you can consider based on the context of the 8086 microprocessor and general microprocessor architecture:
Examples of Memory Interfacing
- Static RAM (SRAM) Interface:
- Description: SRAM is used for cache memory and high-speed memory applications. The 8086 can interface with SRAM chips by connecting the address bus directly to the SRAM address lines and using control signals to read/write data.
- Example: A 64K x 8 SRAM chip can be interfaced with the 8086 to provide fast access to frequently used data.
- Dynamic RAM (DRAM) Interface:
- Description: DRAM is commonly used as main memory in computers. It requires periodic refreshing to maintain data integrity.
- Example: The 8086 can interface with DRAM through multiplexed address/data lines, allowing it to access larger memory sizes efficiently.
- EPROM Interface:
- Description: EPROM (Erasable Programmable Read-Only Memory) is used for storing firmware or software that does not change often.
- Example: The 8086 can read instructions from an EPROM chip during the boot process, where the EPROM is mapped into its address space.
Examples of I/O Interfacing
- Keyboard Interface:
- Description: The keyboard can be interfaced using a microcontroller or dedicated keyboard controller that translates key presses into scan codes.
- Example: The 8086 reads input from a keyboard via an I/O port, where each key press generates an interrupt that signals the CPU to read the corresponding scan code from the keyboard controller.
- Serial Port Interface:
- Description: Serial communication allows data transfer one bit at a time over a single wire or channel.
- Example: The 8086 can communicate with external devices like modems or printers through a UART (Universal Asynchronous Receiver-Transmitter) connected via specific I/O ports.
- Parallel Port Interface:
- Description: Parallel ports allow multiple bits to be sent simultaneously, making them suitable for devices like printers.
- Example: The 8086 can send data to a printer through a parallel port using specific control signals and data lines.
- ADC (Analog-to-Digital Converter) Interface:
- Description: An ADC converts analog signals into digital form, allowing the microprocessor to process real-world signals.
- Example: The 8086 can read sensor data (like temperature or light levels) by interfacing with an ADC chip, where the converted digital values are sent to the CPU via I/O ports.
- DAC (Digital-to-Analog Converter) Interface:
- Description: A DAC converts digital values back into analog signals.
- Example: The 8086 can output sound by sending digital audio data to a DAC, which then produces an analog audio signal for speakers.
Summary
These examples illustrate various methods of interfacing both memory and I/O devices with the 8086 microprocessor. Each interface type serves different purposes and utilizes specific techniques, such as direct addressing, control signals, and data buses, to facilitate communication between the CPU and peripheral devices or memory components. Understanding these interfacing techniques is crucial for designing effective microprocessor-based systems.
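The ADC example above follows a start/poll/read pattern that can be sketched as follows (the fake device, its timing, and the mapping of calls to `IN`/`OUT` port accesses are all assumptions for illustration):

```python
# Hedged sketch of the ADC interface: start a conversion, poll a
# status flag until the result is ready, then read the data register.

class FakeADC:
    """Stand-in for an ADC chip; the 2-poll delay is invented."""
    def __init__(self, value):
        self._value = value
        self._busy = False
        self._ticks = 0
    def start(self):
        self._busy, self._ticks = True, 2   # conversion takes 2 polls
    def ready(self):
        if self._busy:
            self._ticks -= 1
            self._busy = self._ticks > 0
        return not self._busy
    def read(self):
        return self._value

def read_sensor(adc):
    adc.start()                  # OUT to the control port would do this
    while not adc.ready():       # IN from the status port
        pass
    return adc.read()            # IN from the data port

assert read_sensor(FakeADC(137)) == 137
```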
10) Draw and explain block diagram of 8255 programmable peripheral chip.
Block Diagram of the 8255 Programmable Peripheral Interface (PPI)
The 8255 Programmable Peripheral Interface (PPI) is a versatile device used for interfacing peripheral devices with microprocessors. It provides three 8-bit ports (Port A, Port B, and Port C) that can be configured as input or output ports, along with control logic to manage these ports.
The block diagram of the 8255 consists of a data bus buffer, read/write control logic, Group A and Group B control blocks, and the three 8-bit ports. Its main components are explained below.
Explanation of the Components
- Control Logic:
- The control logic manages the operation of the 8255 PPI. It interprets control signals from the microprocessor and determines how data is transferred to and from the ports. It also configures the operational modes of the ports (input or output).
- Ports:
- Port A: An 8-bit port that can be configured as either an input or output port. It is often used for interfacing with devices like keyboards or sensors.
- Port B: Similar to Port A, this is another 8-bit port that can be configured for input or output. It can be used for interfacing with additional peripherals.
- Port C: This port can be split into two 4-bit halves (Port C upper, PC7–PC4, and Port C lower, PC3–PC0) and is typically used for handshaking signals or control lines in communication with other devices.
- Data Bus:
- The data bus connects the ports to the microprocessor, allowing data transfer between the CPU and peripheral devices through the ports.
- Control Word Register:
- The control word register is written by the CPU to configure the operating mode of each port and whether it acts as input or output. In modes 1 and 2, handshake status and interrupt information can be read back through Port C.
Modes of Operation
The 8255 PPI can operate in different modes:
- Mode 0: Basic Input/Output Mode – Ports operate as simple input or output ports.
- Mode 1: Strobed Input/Output Mode – Allows for handshaking signals, useful for interfacing with devices requiring synchronization.
- Mode 2: Bidirectional Bus Mode – Allows data transfer in both directions on Port A, enabling more complex communication protocols.
Conclusion
The 8255 Programmable Peripheral Interface is a crucial component in microprocessor-based systems, providing flexible interfacing options for various peripheral devices. Its ability to configure ports as input or output based on application requirements makes it a versatile choice for many embedded systems. Understanding its block diagram and operational modes is essential for effective implementation in hardware designs.
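Configuration happens by writing a control word whose bits select the mode and direction of each port. A sketch of building the mode-0 control word from the documented bit layout (D7 = mode-set flag; D4, D3, D1, D0 select input = 1 / output = 0 for Port A, Port C upper, Port B, and Port C lower):

```python
# Build an 8255 mode-0 control word from the documented bit layout.
# D7=1 is the mode-set flag; mode bits D6-D5 and D2 are left at 0
# (mode 0 for both groups) in this sketch.

def control_word_mode0(a_in, c_upper_in, b_in, c_lower_in):
    word = 0x80                      # D7 = 1: mode-set flag, mode 0
    if a_in:       word |= 0x10      # D4: Port A direction
    if c_upper_in: word |= 0x08      # D3: Port C upper direction
    if b_in:       word |= 0x02      # D1: Port B direction
    if c_lower_in: word |= 0x01      # D0: Port C lower direction
    return word

assert control_word_mode0(False, False, False, False) == 0x80  # all outputs
assert control_word_mode0(True, True, True, True) == 0x9B      # all inputs
```

In a real system this word would be written once to the 8255's control register address before using the ports.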
11) Draw and explain block diagram of 8254 programmable timer IC.
Block Diagram of the 8254 Programmable Timer IC
The 8254 Programmable Timer is a versatile timer chip used in various applications, including generating precise time delays and frequency generation. It has three independent 16-bit timers/counters that can be programmed for different modes of operation.
The block diagram of the 8254 consists of a data bus buffer, read/write logic, a control word register, and three independent 16-bit counters. Its main components are explained below.
Explanation of the Components
- Control Logic:
- The control logic manages the operation of the 8254 timer. It interprets control signals from the microprocessor to configure the timers, set modes of operation, and manage data transfer between the CPU and the timer.
- Timer/Counters:
- Timer/Counter 0: One of the three independent timers that can be configured for various timing applications. It can be used for generating periodic interrupts or delays.
- Timer/Counter 1: Similar to Timer/Counter 0, this timer can be programmed independently for different timing functions.
- Timer/Counter 2: The third timer, which also operates independently. Each timer can be set to different modes (e.g., mode 0 for interrupt on terminal count, mode 1 for hardware retriggerable one-shot, etc.).
- Output Signals:
- Each timer/counter generates an output signal when it reaches its terminal count. This signal can be used to trigger interrupts or control other devices.
- Status Register:
- The status register provides information about the current state of each timer/counter, including whether it has reached its terminal count or if it is currently running.
Modes of Operation
The 8254 can operate in several modes, including:
- Mode 0: Interrupt on Terminal Count – The timer counts down to zero and generates an output pulse.
- Mode 1: Hardware Retriggerable One-Shot – Generates a single pulse when triggered.
- Mode 2: Rate Generator – Used for generating a continuous square wave output.
- Mode 3: Square Wave Generator – Produces a square wave output signal.
- Mode 4: Software Triggered Strobe – Generates a pulse in response to software commands.
- Mode 5: Hardware Triggered Strobe – Similar to Mode 4 but triggered by hardware signals.
Conclusion
The 8254 Programmable Timer IC is a crucial component in many digital systems, providing flexible timing and counting capabilities. Its ability to operate in multiple modes makes it suitable for a wide range of applications, from simple timing tasks to complex event scheduling in embedded systems. Understanding its block diagram and operational modes is essential for effective implementation in hardware designs.
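The timer's arithmetic is simple: each counter divides its input clock by the programmed count, so count = clock frequency ÷ desired output frequency. A sketch using the classic PC-compatible clock value (an assumption; the chip accepts any clock within its rated range):

```python
# Compute the 8254 count value for a desired output frequency.
# Each counter divides its input clock by the loaded count.

CLOCK_HZ = 1_193_182          # assumed input clock (PC-compatible value)

def count_for_frequency(f_out_hz):
    count = round(CLOCK_HZ / f_out_hz)
    if not 2 <= count <= 65536:       # 16-bit counter (a count of 0 means 65536)
        raise ValueError("frequency out of range for this clock")
    return count

# A ~1 kHz square wave (mode 3) needs a count of about 1193.
assert count_for_frequency(1000) == 1193
```

For mode 3 (square wave) the loaded count sets the full period; halving or doubling the count doubles or halves the output frequency accordingly.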