Q.1 Which of the following is a key characteristic of quantum computing?
Use of transistors for processing
Superposition and entanglement
Single-core sequential execution
Static RAM usage
Explanation - Quantum computing leverages quantum bits (qubits) that can exist in a superposition of multiple states simultaneously, and whose states can be correlated with one another through entanglement.
Correct answer is: Superposition and entanglement
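For small systems, superposition and entanglement can be simulated classically with state vectors. The sketch below (plain NumPy; gate matrices are the standard textbook definitions) builds an equal superposition with a Hadamard gate, then an entangled Bell state with a CNOT gate.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> is the first basis state.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps |0> to the equal superposition (|0> + |1>)/sqrt(2):
# a measurement now yields 0 or 1, each with probability 0.5.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0

# Entanglement: applying CNOT to (H|0>) tensor |0> gives the Bell state
# (|00> + |11>)/sqrt(2) -- the two qubits' measurement outcomes are
# perfectly correlated: both 0 or both 1, each with probability 0.5.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
```

Note that the classical simulation cost doubles with every added qubit, which is exactly why hardware qubits matter.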
Q.2 What does neuromorphic architecture primarily aim to mimic?
Traditional Von Neumann structure
Biological neural networks
RISC-based CPU design
Multi-core GPU execution
Explanation - Neuromorphic architectures are designed to replicate the structure and function of the human brain's neural networks for efficient AI computations.
Correct answer is: Biological neural networks
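The basic unit most neuromorphic chips implement in silicon is a spiking neuron. Below is a minimal software sketch (illustrative, not modeled on any specific chip) of a leaky integrate-and-fire neuron: the membrane potential integrates input current, leaks over time, and emits a spike when it crosses a threshold.

```python
# Leaky integrate-and-fire neuron: leak and threshold values are
# illustrative, not taken from real hardware.
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current   # leaky integration of input current
        if v >= threshold:       # fire a spike and reset the potential
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Sustained input drives the potential over threshold; sparse input does not.
train = lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9])
```

Because neurons only communicate via sparse spikes, neuromorphic hardware can be event-driven and far more energy-efficient than clocked dense matrix math for some workloads.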
Q.3 Which memory technology is considered an emerging non-volatile memory?
DRAM
SRAM
MRAM
Cache memory
Explanation - Magnetoresistive RAM (MRAM) is a non-volatile memory technology that combines the speed of SRAM with the persistence of flash memory.
Correct answer is: MRAM
Q.4 What is the primary advantage of 3D-stacked chip architectures?
Reduced heat generation
Increased transistor count and bandwidth
Simpler fabrication process
Lower power per transistor
Explanation - 3D-stacked architectures vertically integrate multiple layers of chips, reducing communication distance and increasing bandwidth and transistor density.
Correct answer is: Increased transistor count and bandwidth
Q.5 Which architecture is designed to accelerate AI workloads specifically?
GPU
FPGA
TPU
ASIC
Explanation - Tensor Processing Units (TPUs) are specialized ASICs designed by Google to efficiently handle tensor operations commonly used in machine learning.
Correct answer is: TPU
Q.6 What does chiplet-based design aim to solve in modern processors?
Reduce fabrication cost and improve scalability
Replace cache memory
Eliminate the need for GPUs
Simplify assembly line testing
Explanation - Chiplet-based designs break a processor into smaller, reusable dies (chiplets), enabling cost-effective scaling and higher yields.
Correct answer is: Reduce fabrication cost and improve scalability
Q.7 Which trend in CPU architecture focuses on heterogeneous cores?
Big.LITTLE architecture
RISC-V core standardization
Superscalar execution
In-order execution
Explanation - Big.LITTLE architecture combines high-performance 'big' cores with energy-efficient 'LITTLE' cores to optimize power and performance dynamically.
Correct answer is: Big.LITTLE architecture
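The power/performance trade can be made concrete with a toy scheduling model (all numbers and the threshold policy are illustrative, not a real scheduler): heavy tasks go to fast, power-hungry big cores and light tasks to efficient LITTLE cores.

```python
# Relative performance and power per core type (made-up units).
BIG    = {"perf": 4.0, "power": 4.0}
LITTLE = {"perf": 1.0, "power": 0.5}

def schedule(tasks, threshold=2.0):
    """Return (total_time, total_energy) for a simple threshold policy:
    tasks with work >= threshold run on a big core, the rest on LITTLE."""
    time = energy = 0.0
    for work in tasks:
        core = BIG if work >= threshold else LITTLE
        t = work / core["perf"]
        time += t
        energy += t * core["power"]
    return time, energy

tasks = [0.5, 8.0, 1.0, 6.0, 0.25]
t_hetero, e_hetero = schedule(tasks)                # mixed big/LITTLE
t_big, e_big = schedule(tasks, threshold=0.0)       # everything on big cores
```

In this toy instance the heterogeneous schedule spends more wall-clock time but less energy than running everything on big cores, which is the trade big.LITTLE exploits for bursty mobile workloads.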
Q.8 Which of the following is a primary goal of edge computing architectures?
Reduce latency by processing data closer to source
Centralize processing in data centers
Increase clock speed of CPUs
Eliminate the need for memory hierarchy
Explanation - Edge computing pushes computation closer to data sources to minimize latency and bandwidth usage, improving real-time performance.
Correct answer is: Reduce latency by processing data closer to source
Q.9 What does RISC-V architecture emphasize?
Reduced instruction set with open-source standard
Complex instruction set with proprietary design
Large cache memory per core
Vector processing for AI only
Explanation - RISC-V is an open-standard instruction set architecture (ISA) that provides a simple, extensible, and modular design for modern processors.
Correct answer is: Reduced instruction set with open-source standard
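The flavor of a reduced instruction set is easy to show with a tiny interpreter. The sketch below mimics three RV32I-style integer instructions (ADDI, ADD, SUB) over 32 registers with x0 hardwired to zero; it borrows RISC-V mnemonics for illustration only and is not a full or faithful ISA implementation.

```python
def run(program):
    """Interpret a list of (op, rd, rs1, rs2_or_imm) tuples."""
    regs = [0] * 32                  # x0..x31; x0 is hardwired to 0
    for op, rd, a, b in program:
        if op == "addi":
            val = regs[a] + b        # register + immediate
        elif op == "add":
            val = regs[a] + regs[b]
        elif op == "sub":
            val = regs[a] - regs[b]
        else:
            raise ValueError(f"unknown op: {op}")
        if rd != 0:                  # writes to x0 are discarded
            regs[rd] = val
    return regs

regs = run([
    ("addi", 1, 0, 5),   # x1 = x0 + 5
    ("addi", 2, 0, 7),   # x2 = x0 + 7
    ("add",  3, 1, 2),   # x3 = x1 + x2
    ("sub",  4, 3, 1),   # x4 = x3 - x1
])
```

Every instruction has the same simple register-to-register shape, which is what makes RISC designs easy to pipeline and, in RISC-V's case, easy to extend modularly.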
Q.10 Which of the following is an emerging trend in memory hierarchy?
Use of traditional DRAM only
Integration of NVRAM for persistent memory
Removal of caches entirely
Single-level memory design
Explanation - Emerging architectures are incorporating non-volatile RAM (NVRAM) to create persistent memory that is faster than traditional storage but retains data without power.
Correct answer is: Integration of NVRAM for persistent memory
Q.11 What is the main challenge addressed by optical interconnects in modern architectures?
High power consumption of transistors
Communication bottleneck in multi-core systems
Low DRAM density
GPU rendering latency
Explanation - Optical interconnects use light for data transmission to overcome bandwidth and latency limitations in electrical interconnects of multi-core and many-core systems.
Correct answer is: Communication bottleneck in multi-core systems
Q.12 Which of these is a primary feature of approximate computing architectures?
Guaranteed 100% accurate computation
Trading accuracy for energy efficiency
Increasing clock frequency
Adding more cache levels
Explanation - Approximate computing architectures allow some errors in calculations to reduce energy consumption and improve performance for error-tolerant applications.
Correct answer is: Trading accuracy for energy efficiency
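A classic approximate-computing transformation is loop perforation: process only every k-th element and accept a bounded accuracy loss in exchange for roughly k-fold less work. A minimal sketch, using a mean over synthetic data:

```python
def exact_mean(xs):
    return sum(xs) / len(xs)

def approx_mean(xs, stride=4):
    # Loop perforation: skip (stride - 1) of every stride elements,
    # doing ~1/stride of the additions.
    sample = xs[::stride]
    return sum(sample) / len(sample)

data = [float(i % 100) for i in range(10_000)]
exact = exact_mean(data)
approx = approx_mean(data)
error = abs(approx - exact) / exact   # relative error of the approximation
```

Here the approximation does a quarter of the work for a few percent of error, which is acceptable for error-tolerant domains like image processing or ML inference but not, say, for financial ledgers.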
Q.13 What advantage do FPGA-based accelerators provide?
Fixed-function processing only
Customizable hardware for specific workloads
High latency general-purpose computation
Replaces DRAM in memory hierarchy
Explanation - FPGAs can be reprogrammed to create hardware tailored for specific computations, making them versatile for emerging AI and data processing tasks.
Correct answer is: Customizable hardware for specific workloads
Q.14 Which type of architecture allows dynamic adaptation of cores for workload?
Static multi-core
Heterogeneous multi-core
Single-core scalar
In-order superscalar
Explanation - Heterogeneous multi-core architectures include different types of cores that can be dynamically allocated to tasks as the workload changes, improving efficiency.
Correct answer is: Heterogeneous multi-core
Q.15 Which emerging trend focuses on minimizing energy usage per computation?
Energy-proportional computing
Superscalar execution
Single instruction multiple data
Static clock design
Explanation - Energy-proportional computing aims for energy consumption to scale linearly with the workload, improving efficiency especially in data centers.
Correct answer is: Energy-proportional computing
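A simple linear power model makes the idea concrete (all wattages here are made-up illustrative numbers): server power is an idle floor plus a dynamic part that scales with utilization, and proportionality is about shrinking the idle floor.

```python
def power(util, idle, dynamic):
    """Power draw (watts) at utilization util in [0, 1]:
    a fixed idle floor plus a utilization-proportional dynamic part."""
    return idle + dynamic * util

def proportionality(idle, dynamic, util=0.3):
    """Fraction of total power spent on useful work at a typical ~30% load."""
    return (dynamic * util) / power(util, idle, dynamic)

legacy = proportionality(idle=150.0, dynamic=100.0)  # large idle floor
modern = proportionality(idle=20.0, dynamic=230.0)   # near-proportional
```

At the low average utilizations typical of data centers, the legacy profile wastes most of its power just being on, while the near-proportional profile spends most of it on actual work.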
Q.16 What is a key feature of in-memory computing?
Processing is performed inside memory arrays to reduce data movement
Using large off-chip DRAM exclusively
Separate CPU and GPU memory accesses
Vectorized instruction execution
Explanation - In-memory computing integrates computation into memory cells to reduce energy and latency caused by frequent data transfers between memory and processor.
Correct answer is: Processing is performed inside memory arrays to reduce data movement
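One concrete in-memory-computing scheme is the analog resistive crossbar: weights are stored as cell conductances G, inputs are applied as row voltages V, and by Ohm's and Kirchhoff's laws each column's output current is a dot product (I = Gᵀ V), computed where the data lives with no weight movement. A NumPy sketch with illustrative values:

```python
import numpy as np

# Conductances (siemens): one stored weight per crossbar cell.
G = np.array([[0.5, 1.0],
              [2.0, 0.5],
              [1.0, 1.5]])

# Input voltages applied to the three rows.
V = np.array([0.2, 0.4, 0.1])

# Each column current sums G[i][j] * V[i] over its rows: the whole
# matrix-vector product happens in one analog step inside the array.
I = G.T @ V
```

A digital processor would need one multiply-accumulate per cell plus the memory traffic to fetch every weight; the crossbar gets the same result in a single parallel read.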
Q.17 Which emerging architecture leverages massive parallelism for deep learning?
GPU and TPU
RISC CPU
DRAM-only systems
FPGA for single-thread execution
Explanation - GPUs and TPUs are designed for parallel processing, making them ideal for executing large-scale deep learning models efficiently.
Correct answer is: GPU and TPU
Q.18 Which of the following is a benefit of chip stacking in 3D ICs?
Lower interconnect distance and faster communication
Reduced transistor density
Lower bandwidth
Simpler design verification
Explanation - 3D ICs stack multiple dies vertically, shortening interconnects between layers, which improves speed and reduces power consumption.
Correct answer is: Lower interconnect distance and faster communication
Q.19 Which emerging trend involves using multiple specialized accelerators together?
Heterogeneous computing
Scalar processing
In-order execution
Von Neumann single-core
Explanation - Heterogeneous computing integrates CPUs, GPUs, TPUs, and other accelerators to efficiently handle diverse workloads within a system.
Correct answer is: Heterogeneous computing
Q.20 Which architecture type supports quantum annealing for optimization problems?
D-Wave quantum computers
Traditional CPU
GPU-based systems
FPGA arrays
Explanation - Quantum annealing, used in D-Wave systems, is specialized for solving optimization problems by exploiting quantum-mechanical effects such as tunneling to search an energy landscape for low-energy states.
Correct answer is: D-Wave quantum computers
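Annealers accept problems as a QUBO: minimize E(x) = xᵀQx over binary vectors x. The sketch below solves the same formulation with classical simulated annealing (thermal fluctuations standing in for quantum ones); the 3-variable Q matrix is a made-up instance whose optimum is x = (1, 0, 1).

```python
import math
import random

# Made-up QUBO instance: E(x) = sum_ij Q[i][j] * x[i] * x[j].
Q = [[-2.0,  1.0,  0.0],
     [ 1.0, -1.0,  1.5],
     [ 0.0,  1.5, -2.0]]

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

def anneal(steps=2000, t0=2.0, seed=0):
    """Simulated annealing: random bit flips, cooling temperature,
    occasionally accepting uphill moves to escape local minima."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(3)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3        # cooling schedule
        i = rng.randrange(3)
        y = x[:]
        y[i] ^= 1                                  # flip one bit
        if energy(y) <= energy(x) or rng.random() < math.exp((energy(x) - energy(y)) / t):
            x = y
    return x, energy(x)

best_x, best_e = anneal()
```

A quantum annealer explores the same landscape physically, tunneling through barriers rather than hopping over them thermally, which is why it targets exactly this class of optimization problem.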
Q.21 What is the primary purpose of silicon photonics in emerging architectures?
High-speed optical data transfer on-chip and between chips
Replace DRAM entirely
Enable in-memory computation
Reduce transistor gate size
Explanation - Silicon photonics uses light to transmit data at high speeds within and between chips, addressing bandwidth bottlenecks in multi-core and data center systems.
Correct answer is: High-speed optical data transfer on-chip and between chips
Q.22 Which type of architecture is designed for approximate, error-tolerant computations?
Approximate computing
Superscalar RISC
CISC in-order
Traditional DRAM-based CPU
Explanation - Approximate computing architectures intentionally relax precision for certain computations to gain efficiency in energy and speed, especially in AI and multimedia workloads.
Correct answer is: Approximate computing
Q.23 Which of the following is a characteristic of heterogeneous system-on-chip (SoC)?
Integrates different types of cores and accelerators on a single chip
Single-type core design only
No support for accelerators
Purely analog processing
Explanation - Heterogeneous SoCs combine CPU cores, GPUs, AI accelerators, and other units on a single chip to optimize performance and energy efficiency for diverse workloads.
Correct answer is: Integrates different types of cores and accelerators on a single chip
Q.24 What is one emerging trend in processor security architectures?
Incorporating hardware enclaves for secure computation
Removing cache memory
Increasing transistor size
Reducing the number of cores
Explanation - Hardware enclaves provide isolated execution environments to protect sensitive computations from attacks and unauthorized access.
Correct answer is: Incorporating hardware enclaves for secure computation
