Q.1 What does a probabilistic model represent?
Deterministic rules
Relationships with uncertainty
Exact outcomes only
Pure logic reasoning
Explanation - Probabilistic models capture uncertain relationships between variables instead of deterministic mappings.
Correct answer is: Relationships with uncertainty
Q.2 Which law is fundamental to probability theory?
Newton’s Law
Bayes’ Theorem
Ohm’s Law
Moore’s Law
Explanation - Bayes’ Theorem is fundamental in probability theory and inference, connecting prior, likelihood, and posterior probabilities.
Correct answer is: Bayes’ Theorem
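As a concrete illustration, here is a minimal Python sketch of Bayes' Theorem applied to a hypothetical disease-testing scenario; the prevalence, sensitivity, and false-positive rate are made-up numbers:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: 1% prevalence, 95% sensitivity, 90% specificity.
prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.10   # P(positive | no disease) = 1 - specificity

# Evidence P(positive) via the law of total probability.
evidence = sensitivity * prior + false_positive * (1 - prior)

posterior = sensitivity * prior / evidence  # P(disease | positive)
print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.088
```

Note how a positive test raises the belief from 1% to only about 9%, because the prior is so low.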
Q.3 What type of graph structure is used in Bayesian Networks?
Cyclic Graph
Directed Acyclic Graph
Undirected Graph
Flowchart
Explanation - Bayesian Networks are represented using Directed Acyclic Graphs (DAGs) to show probabilistic dependencies.
Correct answer is: Directed Acyclic Graph
Q.4 In probabilistic inference, what is the 'posterior'?
Prior knowledge
Updated belief after evidence
Likelihood function
Evidence probability
Explanation - The posterior is the updated probability distribution after considering the evidence using Bayes’ Theorem.
Correct answer is: Updated belief after evidence
Q.5 Which distribution models the number of successes in a fixed number of independent Bernoulli trials?
Normal distribution
Poisson distribution
Binomial distribution
Exponential distribution
Explanation - The Binomial distribution gives the probability of each possible number of successes in a fixed number of independent Bernoulli trials.
Correct answer is: Binomial distribution
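To make this concrete, a short Python sketch of the Binomial probability mass function using only the standard library; the coin-flip numbers are illustrative:

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p): C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 10 fair coin flips.
print(binomial_pmf(3, 10, 0.5))  # ~0.117
```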
Q.6 What is the main goal of inference in probabilistic models?
Find deterministic rules
Predict missing or hidden variables
Eliminate uncertainty completely
Build neural networks
Explanation - Probabilistic inference estimates hidden or missing variables from observed data while accounting for uncertainty.
Correct answer is: Predict missing or hidden variables
Q.7 Which of the following is NOT a probabilistic model?
Bayesian Network
Markov Chain
Decision Tree
Hidden Markov Model
Explanation - Decision Trees are deterministic classifiers; Bayesian Networks, Markov Chains, and Hidden Markov Models all model probability distributions.
Correct answer is: Decision Tree
Q.8 What does the Markov property state?
Future depends only on present
Future depends on entire history
Future is independent of present
Future is deterministic
Explanation - The Markov property states that the next state depends only on the current state, not on the rest of the history.
Correct answer is: Future depends only on present
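A minimal simulation sketch may help: in the hypothetical two-state weather chain below (transition probabilities are made up), sampling the next state needs only the current state, which is exactly the Markov property.

```python
import random

# A hypothetical two-state weather chain; transition probabilities are made up.
# transitions[state] gives P(next state | current state): the next state
# depends only on the current one, which is the Markov property.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state: str) -> str:
    """Sample the next state using only the current state."""
    states, probs = zip(*transitions[state].items())
    return random.choices(states, weights=probs)[0]

state = "sunny"
for _ in range(5):
    state = step(state)
    print(state)
```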
Q.9 Which method approximates inference when exact inference is intractable?
Gradient descent
Monte Carlo methods
Linear regression
Backpropagation
Explanation - Monte Carlo sampling methods approximate probabilistic inference when exact inference is computationally hard.
Correct answer is: Monte Carlo methods
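As an illustration, a minimal Monte Carlo sketch that approximates a probability by sampling, here P(Z > 1) for a standard normal Z; the sample size and seed are arbitrary choices:

```python
import random

# Monte Carlo approximation of P(Z > 1) for Z ~ N(0, 1): draw many samples
# and count how often the event occurs.
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n) if random.gauss(0, 1) > 1)
print(hits / n)  # close to the exact value, about 0.159
```

The same idea, drawing samples and averaging, underlies Monte Carlo approximations of posterior expectations when exact integration is intractable.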
Q.10 Which of these is an example of a latent variable model?
Linear Regression
Gaussian Mixture Model
Decision Tree
K-Nearest Neighbors
Explanation - Gaussian Mixture Models assume a hidden (latent) variable indicating which component distribution generated each observation.
Correct answer is: Gaussian Mixture Model
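To show the latent variable explicitly, here is a small sampling sketch for a hypothetical two-component Gaussian mixture; the weights, means, and standard deviations are made up:

```python
import random

# Sampling from a two-component Gaussian mixture: the latent variable z
# selects a component, then the observation x is drawn from that component.
random.seed(0)
weights = [0.3, 0.7]                     # mixing proportions (made up)
means, stds = [0.0, 5.0], [1.0, 1.5]     # component parameters (made up)

def sample() -> tuple[int, float]:
    z = random.choices([0, 1], weights=weights)[0]  # hidden component label
    x = random.gauss(means[z], stds[z])             # observed value
    return z, x

print([sample() for _ in range(3)])
```

In practice only x is observed; inference (e.g., the EM algorithm) recovers the hidden z probabilistically.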
Q.11 In Bayesian inference, what is the prior?
Observed evidence
Initial belief before evidence
Updated probability
Likelihood ratio
Explanation - The prior represents our belief about parameters before considering new evidence.
Correct answer is: Initial belief before evidence
Q.12 What is the key assumption of the Naïve Bayes classifier?
Features are correlated
Features are independent
Data is continuous
Model has no priors
Explanation - Naïve Bayes assumes that features are conditionally independent given the class label.
Correct answer is: Features are independent
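A minimal scoring sketch under this independence assumption, with all probabilities below being made-up illustrative numbers: the class score factorizes into the prior times a product of per-feature likelihoods.

```python
from math import prod

# Naive Bayes scoring: with conditional independence, the class score
# is the prior times a product of per-feature likelihoods.
prior = {"spam": 0.4, "ham": 0.6}
# P(feature_i | class) for two features of a toy message (made up).
likelihood = {"spam": [0.20, 0.05], "ham": [0.02, 0.10]}

scores = {c: prior[c] * prod(likelihood[c]) for c in prior}
print(max(scores, key=scores.get), scores)  # spam wins: 0.004 vs 0.0012
```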
Q.13 Which distribution is used to model the number of events in a fixed interval?
Poisson distribution
Binomial distribution
Normal distribution
Uniform distribution
Explanation - The Poisson distribution models the probability of a number of events occurring in a fixed interval of time or space.
Correct answer is: Poisson distribution
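For concreteness, a short Python sketch of the Poisson probability mass function; the rate and count are illustrative:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for X ~ Poisson(lam): lam^k * e^(-lam) / k!."""
    return lam**k * exp(-lam) / factorial(k)

# Probability of exactly 2 arrivals when the average rate is 3 per interval.
print(poisson_pmf(2, 3.0))  # ~0.224
```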
Q.14 What is the joint probability of two independent events A and B?
P(A) + P(B)
P(A) × P(B)
P(A|B)
P(B|A)
Explanation - For independent events, the joint probability is the product of the individual probabilities: P(A and B) = P(A) × P(B). For example, two fair coin flips give P(heads, heads) = 0.5 × 0.5 = 0.25.
Correct answer is: P(A) × P(B)
Q.15 What does Maximum Likelihood Estimation (MLE) aim to maximize?
Prior probability
Likelihood of data
Posterior probability
Marginal probability
Explanation - MLE finds parameter values that maximize the likelihood of the observed data.
Correct answer is: Likelihood of data
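As an illustration, a small sketch for a Bernoulli model with made-up coin-flip data: a grid search over p shows the likelihood peaking at the sample mean, which is the closed-form Bernoulli MLE.

```python
from math import prod

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # made-up coin-flip outcomes

def likelihood(p: float) -> float:
    """Bernoulli likelihood L(p) = product over x in data of p^x * (1-p)^(1-x)."""
    return prod(p**x * (1 - p)**(1 - x) for x in data)

# The coarse grid search finds the maximum at the sample mean (0.7 here),
# matching the closed-form MLE for the Bernoulli parameter.
p_hat = max((i / 100 for i in range(1, 100)), key=likelihood)
print(p_hat)  # 0.7
```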
Q.16 Which algorithm is commonly used for inference in Hidden Markov Models?
Dijkstra’s algorithm
Viterbi algorithm
Gradient descent
K-means
Explanation - The Viterbi algorithm is used to find the most probable sequence of hidden states in HMMs.
Correct answer is: Viterbi algorithm
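A compact Viterbi sketch for a toy HMM may help; the states, transition, and emission probabilities below are all made-up illustrative numbers:

```python
# Viterbi: dynamic programming over the best path ending in each state.
states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def viterbi(obs):
    """Return (probability, path) of the most likely hidden-state sequence."""
    # best[s] = (probability, path) of the best path ending in state s.
    best = {s: (start[s] * emit[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * trans[prev][s] * emit[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])

prob, path = viterbi(["walk", "shop", "clean"])
print(path, prob)
```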
Q.17 What does the conditional probability P(A|B) represent?
Probability of A given B
Probability of B given A
Joint probability of A and B
Independent probability of A
Explanation - Conditional probability P(A|B) is the probability that A occurs given that B has occurred, computed as P(A|B) = P(A and B) / P(B).
Correct answer is: Probability of A given B
Q.18 Which technique is used to approximate posterior distributions in Bayesian inference?
K-means clustering
Variational Inference
Linear regression
Decision Trees
Explanation - Variational Inference approximates posterior distributions by optimizing a simpler family of distributions.
Correct answer is: Variational Inference
Q.19 In probability, what is normalization?
Ensuring probabilities sum to 1
Removing noise from data
Scaling variables
Adjusting priors
Explanation - Normalization ensures that the sum of probabilities across all possible outcomes equals 1.
Correct answer is: Ensuring probabilities sum to 1
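A one-step sketch in Python; the scores are arbitrary:

```python
# Normalizing unnormalized scores into a probability distribution:
# divide each score by the total so the values sum to 1.
scores = [2.0, 5.0, 3.0]            # e.g., unnormalized posterior weights
total = sum(scores)
probs = [s / total for s in scores]
print(probs, sum(probs))  # [0.2, 0.5, 0.3] 1.0
```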
Q.20 What does the law of total probability help compute?
Posterior probability
Marginal probability
Likelihood
Prior probability
Explanation - The law of total probability expresses a marginal probability as a weighted sum of conditional probabilities over a partition: P(A) = Σ_i P(A|B_i) P(B_i).
Correct answer is: Marginal probability
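A small numeric sketch; the partition and conditional probabilities are made up:

```python
# Law of total probability: P(A) = sum over i of P(A | B_i) * P(B_i),
# where the B_i partition the sample space.
p_b = [0.3, 0.5, 0.2]          # P(B_1), P(B_2), P(B_3)
p_a_given_b = [0.9, 0.4, 0.1]  # P(A | B_i)

p_a = sum(pa * pb for pa, pb in zip(p_a_given_b, p_b))
print(p_a)  # 0.27 + 0.20 + 0.02 = 0.49
```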
Q.21 What is a conjugate prior?
A prior that cancels evidence
A prior that leads to posterior of same family
A prior with zero probability
A non-Bayesian prior
Explanation - Conjugate priors ensure that the posterior distribution is in the same family as the prior, simplifying inference.
Correct answer is: A prior that leads to posterior of same family
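The Beta-Bernoulli pair is the classic example; here is a minimal sketch with made-up prior hyperparameters and data:

```python
# Beta-Bernoulli conjugacy: a Beta(a, b) prior on a coin's bias, updated with
# observed heads/tails, yields a Beta posterior in closed form (same family).
a, b = 2.0, 2.0                  # hypothetical Beta prior hyperparameters
heads, tails = 7, 3              # observed data

a_post, b_post = a + heads, b + tails   # posterior is Beta(a + heads, b + tails)
posterior_mean = a_post / (a_post + b_post)
print(f"Beta({a_post}, {b_post}), mean = {posterior_mean:.3f}")  # mean 0.643
```

The update is just counting, which is why conjugate priors simplify inference so much.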
Q.22 Which algorithm is used for approximate inference in Bayesian networks?
Backpropagation
Belief Propagation
Linear Regression
Support Vector Machine
Explanation - Belief propagation (message passing) computes marginals in probabilistic graphical models; it is exact on tree-structured networks, and its loopy variant gives approximate inference on networks with cycles.
Correct answer is: Belief Propagation
Q.23 What is the difference between prior and posterior in Bayesian inference?
Prior is after evidence, posterior is before
Prior is before evidence, posterior is after
Both are before evidence
Both are after evidence
Explanation - Prior reflects initial belief, while posterior is the updated belief after observing data.
Correct answer is: Prior is before evidence, posterior is after
Q.24 What kind of variables do probability distributions describe?
Deterministic constants
Random variables
Fixed parameters
Geometric figures
Explanation - Probability distributions describe the likelihood of outcomes of random variables.
Correct answer is: Random variables
Q.25 Which distribution approximates the Binomial distribution when n is large and p is small?
Normal distribution
Poisson distribution
Exponential distribution
Uniform distribution
Explanation - For large n and small p, the Binomial(n, p) distribution is well approximated by a Poisson distribution with rate λ = np.
Correct answer is: Poisson distribution
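A quick numeric check, with n and p chosen arbitrarily to be large and small respectively:

```python
from math import comb, exp, factorial

# Comparing Binomial(n, p) with its Poisson(lam = n*p) approximation.
n, p = 1000, 0.003
lam = n * p  # 3.0

for k in range(5):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = lam**k * exp(-lam) / factorial(k)
    print(k, round(binom, 4), round(poisson, 4))  # nearly identical values
```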
