# Algorithms and Problem Solving: MCQ Practice Set

Q.1 What is the worst-case time complexity of binary search in a sorted array of size n?

O(n)
O(log n)
O(n log n)
O(1)
Explanation - Binary search halves the search interval at each step, giving logarithmic time complexity.
Correct answer is: O(log n)
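The halving step can be sketched in a few lines of Python (function and variable names are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search interval each step
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

binary_search([1, 3, 5, 7, 9, 11], 7)   # -> 3
binary_search([1, 3, 5, 7, 9, 11], 4)   # -> -1
```

Each iteration discards half of the remaining interval, so at most about log2(n) iterations are needed.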

Q.2 Which algorithm design technique solves a problem by breaking it into smaller sub-problems and combining their results?

Greedy
Divide and Conquer
Dynamic Programming
Backtracking
Explanation - Divide and Conquer splits a problem into sub-problems, solves them recursively, and combines their solutions.
Correct answer is: Divide and Conquer
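The pattern can be illustrated on a small problem, finding the maximum of an array (the function name `dc_max` is made up for this sketch):

```python
def dc_max(arr, lo=0, hi=None):
    """Maximum of arr[lo..hi] by divide and conquer."""
    if hi is None:
        hi = len(arr) - 1
    if lo == hi:                  # base case: a single element
        return arr[lo]
    mid = (lo + hi) // 2
    # divide: solve each half recursively; combine: take the larger result
    return max(dc_max(arr, lo, mid), dc_max(arr, mid + 1, hi))

dc_max([3, 7, 2, 9, 4])   # -> 9
```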

Q.3 What is the main difference between greedy algorithms and dynamic programming?

Greedy always produces optimal solution, DP may not
DP solves subproblems only once, greedy may not
Greedy uses recursion, DP does not
DP always produces suboptimal solution, greedy does not
Explanation - Dynamic programming stores results of subproblems to avoid recomputation, while greedy makes local optimal choices without such storage.
Correct answer is: DP solves subproblems only once, greedy may not
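The difference is easy to see on coin change with the coin system {1, 3, 4}, a standard example where the greedy choice fails (both functions below are illustrative sketches):

```python
def greedy_coins(coins, amount):
    """Largest coin first: locally optimal, may miss the global optimum."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """Bottom-up DP: each sub-amount is solved once and stored."""
    INF = float('inf')
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None

greedy_coins([1, 3, 4], 6)   # -> 3  (4 + 1 + 1)
dp_coins([1, 3, 4], 6)       # -> 2  (3 + 3)
```

The greedy choice of the 4-coin looks best locally but forces two extra 1-coins; the DP table considers every stored subproblem and finds the true optimum.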

Q.4 Which of the following sorting algorithms has the best average-case time complexity?

Bubble Sort
Insertion Sort
Merge Sort
Selection Sort
Explanation - Merge Sort consistently has O(n log n) complexity on average, while the others have O(n^2) average-case complexity.
Correct answer is: Merge Sort
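A minimal merge sort sketch, showing the split-sort-merge structure behind the O(n log n) bound:

```python
def merge_sort(arr):
    """Return a sorted copy of arr: split, sort halves, merge."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stability)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

merge_sort([5, 2, 9, 1])   # -> [1, 2, 5, 9]
```

There are about log2(n) levels of splitting, and merging at each level touches all n elements, hence n log n.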

Q.5 What is the space complexity of a recursive implementation of the factorial function?

O(1)
O(n)
O(log n)
O(n^2)
Explanation - Recursive calls are stored on the call stack, and for n calls, the space required is O(n).
Correct answer is: O(n)
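The stack growth is visible in a direct recursive sketch: computing `factorial(n)` leaves n calls pending before the base case returns.

```python
def factorial(n):
    """n! by recursion; each pending call holds a stack frame, so space is O(n)."""
    if n <= 1:
        return 1                      # base case: 0! = 1! = 1
    return n * factorial(n - 1)       # frame for n waits on the frame for n-1

factorial(5)   # -> 120
```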

Q.6 In which scenario is a linear search preferable over binary search?

Array is sorted
Array is unsorted
Array is very large and sorted
Array is in decreasing order
Explanation - Binary search requires a sorted array. For unsorted arrays, linear search is necessary.
Correct answer is: Array is unsorted
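Linear search makes no ordering assumption, so it works on any array (names here are illustrative):

```python
def linear_search(arr, target):
    """Scan left to right; return the first matching index, or -1."""
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

linear_search([4, 2, 7, 1], 7)   # -> 2  (works even though the array is unsorted)
linear_search([4, 2, 7, 1], 5)   # -> -1
```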

Q.7 Which of the following is NOT a characteristic of dynamic programming?

Overlapping subproblems
Optimal substructure
Recursive computation
Randomized decision making
Explanation - Dynamic programming relies on overlapping subproblems and optimal substructure, not randomness.
Correct answer is: Randomized decision making

Q.8 Which data structure is mainly used in implementing depth-first search (DFS) on a graph?

Queue
Stack
Priority Queue
Linked List
Explanation - DFS uses a stack (explicit or implicit via recursion) to explore vertices deep before backtracking.
Correct answer is: Stack
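An iterative DFS sketch with an explicit stack (the graph is given as an adjacency-list dict; the recursive form uses the call stack implicitly instead):

```python
def dfs(graph, start):
    """Return vertices in DFS visit order using an explicit stack."""
    visited, stack, order = set(), [start], []
    while stack:
        v = stack.pop()               # LIFO: go deep before backtracking
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # reversed() so neighbors are visited in their listed order
        for nbr in reversed(graph.get(v, [])):
            if nbr not in visited:
                stack.append(nbr)
    return order

dfs({'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}, 'A')   # -> ['A', 'B', 'D', 'C']
```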

Q.9 What is the best-case time complexity of insertion sort?

O(n^2)
O(n log n)
O(n)
O(log n)
Explanation - Best-case occurs when the array is already sorted; only one comparison per element is needed.
Correct answer is: O(n)
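A short insertion sort sketch; the comment marks where the best case comes from:

```python
def insertion_sort(arr):
    """Return a sorted copy of arr by insertion sort."""
    a = list(arr)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # On an already-sorted input, a[j] > key is false immediately,
        # so each element costs one comparison: O(n) best case overall.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]           # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

insertion_sort([4, 1, 3, 2])   # -> [1, 2, 3, 4]
```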

Q.10 Which of the following problems is solved using the greedy algorithm?

0/1 Knapsack
Fractional Knapsack
Longest Common Subsequence
Matrix Chain Multiplication
Explanation - Fractional Knapsack can be solved greedily by taking items with the highest value/weight ratio first.
Correct answer is: Fractional Knapsack
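The greedy rule is sketched below on the classic example of items (value, weight) = (60, 10), (100, 20), (120, 30) with capacity 50:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Greedy by value/weight ratio."""
    total = 0.0
    # sort by value-per-unit-weight, best ratio first
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)          # fractions are allowed
        total += value * take / weight
        capacity -= take
    return total

fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50)   # -> 240.0
```

Ratios are 6, 5, and 4, so the greedy picks all of the first two items and two-thirds of the third: 60 + 100 + 80 = 240. In the 0/1 variant fractions are forbidden, the greedy choice can be suboptimal, and DP is needed instead.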

Q.11 Which of the following is a stable sorting algorithm?

Quick Sort
Heap Sort
Merge Sort
Selection Sort
Explanation - Merge Sort maintains the relative order of equal elements, making it stable.
Correct answer is: Merge Sort
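Stability is easy to demonstrate. The sketch below uses Python's built-in `sorted()` (Timsort, a stable, merge-based sort) rather than a hand-written merge sort; the record values are made up:

```python
# Records with equal keys keep their original relative order under a stable sort.
records = [("apple", 2), ("banana", 1), ("cherry", 2), ("date", 1)]
by_count = sorted(records, key=lambda r: r[1])   # sort only by the number
# -> [("banana", 1), ("date", 1), ("apple", 2), ("cherry", 2)]
# "banana" still precedes "date", and "apple" still precedes "cherry".
```

An unstable sort such as heap sort or selection sort gives no such guarantee: equal-keyed records may swap places.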

Q.12 What is the time complexity of finding the maximum element in an unsorted array of size n?

O(n)
O(log n)
O(n log n)
O(1)
Explanation - Every element must be checked in the worst case, giving O(n) time complexity.
Correct answer is: O(n)

Q.13 Which algorithm design technique uses the principle of making choices that are locally optimal?

Dynamic Programming
Divide and Conquer
Greedy
Backtracking
Explanation - Greedy algorithms make local optimal choices at each step to achieve a global solution.
Correct answer is: Greedy

Q.14 Which of the following is a characteristic of a problem suitable for dynamic programming?

Non-overlapping subproblems
Optimal substructure and overlapping subproblems
Requires randomness
No recursive structure
Explanation - Dynamic programming works when a problem can be broken into overlapping subproblems with optimal substructure.
Correct answer is: Optimal substructure and overlapping subproblems

Q.15 Which traversal method is used in breadth-first search (BFS)?

Preorder
Inorder
Queue-based level order
Stack-based
Explanation - BFS explores neighbors level by level, implemented using a queue.
Correct answer is: Queue-based level order
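A minimal BFS sketch using a queue (`collections.deque`; the graph is an adjacency-list dict):

```python
from collections import deque

def bfs(graph, start):
    """Return vertices in BFS (level-order) visit order."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()           # FIFO: finish one level before the next
        order.append(v)
        for nbr in graph.get(v, []):
            if nbr not in visited:
                visited.add(nbr)      # mark on enqueue to avoid duplicates
                queue.append(nbr)
    return order

bfs({'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, 'A')   # -> ['A', 'B', 'C', 'D']
```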

Q.16 What is the worst-case time complexity of quicksort?

O(n^2)
O(n log n)
O(n)
O(log n)
Explanation - The worst case occurs when the pivot splits the array very unevenly at every step, e.g., an already sorted array with a naive first- or last-element pivot choice.
Correct answer is: O(n^2)
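A compact (non-in-place) quicksort sketch; the pivot comment ties back to the worst case:

```python
def quicksort(arr):
    """Return a sorted copy of arr. Not in-place, for clarity."""
    if len(arr) <= 1:
        return list(arr)
    # Middle-element pivot; picking the first element instead degrades
    # an already sorted input to O(n^2), since every split is 1 vs n-1.
    pivot = arr[len(arr) // 2]
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

quicksort([3, 6, 1, 8, 2])   # -> [1, 2, 3, 6, 8]
```

Randomized pivot selection makes the O(n^2) case vanishingly unlikely, which is why quicksort's expected time is O(n log n).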

Q.17 Which technique is used to optimize recursive solutions that have overlapping subproblems?

Backtracking
Memoization
Greedy
Brute Force
Explanation - Memoization stores results of subproblems to avoid redundant calculations in recursion.
Correct answer is: Memoization
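Fibonacci is the standard illustration: the naive recursion recomputes overlapping subproblems exponentially often, while memoization (here via `functools.lru_cache`) solves each one once.

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time: fib_naive(n-2) is recomputed inside fib_naive(n-1)."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Each subproblem is cached, so only O(n) distinct calls are made."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_memo(30)   # -> 832040, instantly; fib_naive(30) makes over a million calls
```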

Q.18 Which of the following is an NP-complete problem?

Bubble Sort
Dijkstra's Algorithm
Travelling Salesman Problem
Merge Sort
Explanation - The decision version of TSP is NP-complete: no polynomial-time algorithm is known, and finding an optimal tour is computationally hard. The others are polynomial-time algorithms, not hard problems.
Correct answer is: Travelling Salesman Problem

Q.19 Which algorithm design technique systematically searches for a solution by extending partial solutions and abandoning those that violate the constraints?

Greedy
Dynamic Programming
Divide and Conquer
Backtracking
Explanation - Backtracking explores all possibilities recursively and abandons paths that fail constraints.
Correct answer is: Backtracking
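The N-queens problem is the textbook backtracking example; the sketch below counts valid placements, abandoning a partial board as soon as a queen is attacked:

```python
def n_queens(n):
    """Count placements of n non-attacking queens on an n x n board."""
    def place(row, cols, diag1, diag2):
        if row == n:                  # all rows filled: one complete solution
            return 1
        count = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue              # constraint violated: abandon this path
            # try this column, then backtrack by not carrying it forward
            count += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return count
    return place(0, set(), set(), set())

n_queens(4)   # -> 2
n_queens(8)   # -> 92
```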

Q.20 Which data structure is mainly used in breadth-first search (BFS)?

Stack
Queue
Heap
Graph
Explanation - BFS uses a queue to process nodes level by level.
Correct answer is: Queue

Q.21 Which algorithm can be used to detect a cycle in a directed graph?

DFS
BFS
Topological Sorting
Kruskal's Algorithm
Explanation - DFS can detect cycles by checking for back edges during recursion.
Correct answer is: DFS
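A DFS cycle check can be sketched with the usual three-color scheme: a neighbor that is still on the current recursion path (GRAY) means a back edge, hence a cycle.

```python
def has_cycle(graph):
    """True if the directed graph (adjacency-list dict) contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / finished
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for nbr in graph.get(v, []):
            if color.get(nbr, WHITE) == GRAY:     # back edge -> cycle
                return True
            if color.get(nbr, WHITE) == WHITE and visit(nbr):
                return True
        color[v] = BLACK              # fully explored, remove from path
        return False

    return any(color[v] == WHITE and visit(v) for v in list(graph))

has_cycle({'A': ['B'], 'B': ['C'], 'C': ['A']})   # -> True
has_cycle({'A': ['B'], 'B': ['C'], 'C': []})      # -> False
```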

Q.22 What is the main advantage of using a hash table?

Fast insertion, deletion, and search in average case
Sorted storage
Low memory usage
Guaranteed O(1) worst-case access
Explanation - Hash tables provide average O(1) time complexity for insertion, deletion, and search operations.
Correct answer is: Fast insertion, deletion, and search in average case

Q.23 Which of the following is true about recursive algorithms?

They always use more memory than iterative algorithms
They break problems into smaller instances
They never call themselves
They cannot be converted to iterative form
Explanation - Recursive algorithms solve a problem by solving smaller instances of the same problem.
Correct answer is: They break problems into smaller instances

Q.24 Which of the following is not a divide-and-conquer algorithm?

Merge Sort
Quick Sort
Binary Search
Fibonacci using recursion without memoization
Explanation - Naive recursive Fibonacci is not considered divide and conquer because its subproblems overlap heavily: the same values are recomputed many times, which is the hallmark of a dynamic programming problem instead.
Correct answer is: Fibonacci using recursion without memoization

Q.25 Which of the following statements about the time complexity of algorithms is true?

O(n) < O(log n)
O(n^2) < O(n log n)
O(log n) < O(n)
O(n log n) < O(log n)
Explanation - Logarithmic complexity grows slower than linear complexity, so O(log n) < O(n).
Correct answer is: O(log n) < O(n)