Junior · Multiple choice
What is the p-value in hypothesis testing?
A. A measure of the probability of a null hypothesis being true.
B. A parameter used to calculate confidence intervals.
C. A value that indicates the strength of a correlation.
D. A metric that determines the statistical significance of an observed effect.
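The p-value idea can be made concrete with a minimal pure-Python sketch: an exact one-sided binomial test on hypothetical coin-flip data (the counts below are illustrative, not from the source).

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided p-value: the probability of observing at least `successes`
    successes in `trials` Bernoulli(p) trials if the null hypothesis
    (success probability = p) is true."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical example: 9 heads in 12 flips of a supposedly fair coin.
p_val = binomial_p_value(9, 12)
# A small p-value means the observed result would be unlikely under the
# null hypothesis; the effect is called statistically significant when
# p_val falls below the chosen significance level (e.g. 0.05).
```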
Junior · Multiple choice
What is the Central Limit Theorem (CLT)?
A. A theorem that states the distribution of sample means approximates a normal distribution as the sample size becomes large.
B. A principle that explains the relationship between two random variables.
C. A method for calculating the variance of a dataset.
D. A hypothesis that predicts the behavior of a single sample.
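The CLT is easy to observe empirically. A minimal sketch, assuming a uniform (decidedly non-normal) source distribution: the means of repeated samples cluster tightly around the population mean, with spread shrinking like sigma/sqrt(n).

```python
import random
import statistics

random.seed(0)  # deterministic for reproducibility

def sample_means(n_samples: int, sample_size: int) -> list[float]:
    """Means of repeated samples drawn from Uniform(0, 1)."""
    return [statistics.fmean(random.random() for _ in range(sample_size))
            for _ in range(n_samples)]

means = sample_means(n_samples=5000, sample_size=50)
# The sample means concentrate near the population mean 0.5, and their
# standard deviation is close to sigma/sqrt(n) = (1/sqrt(12))/sqrt(50) ≈ 0.041,
# regardless of the uniform shape of the underlying distribution.
```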
Junior · Multiple choice
What is the difference between Type I and Type II errors?
A. Type I error is a false positive, while Type II error is a false negative.
B. Type I error occurs when a true null hypothesis is accepted, and Type II error occurs when a false null hypothesis is rejected.
C. Type I error and Type II error are the same and interchangeable.
D. Type I error is a false negative, while Type II error is a false positive.
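The two error types follow mechanically from two booleans: whether the null hypothesis is actually true, and whether the test rejected it. A small sketch (the function name is ours, for illustration):

```python
def error_type(null_is_true: bool, null_rejected: bool) -> str:
    """Classify the outcome of a hypothesis test.

    Type I error (false positive): rejecting a null hypothesis that is true.
    Type II error (false negative): failing to reject a null that is false.
    """
    if null_is_true and null_rejected:
        return "Type I error (false positive)"
    if not null_is_true and not null_rejected:
        return "Type II error (false negative)"
    return "correct decision"
```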
Mid-level · Multiple choice
Differentiate between supervised, unsupervised, and reinforcement learning.
A. All three learning types use the same data and methods.
B. Supervised learning uses unlabeled data, unsupervised learning uses labeled data, and reinforcement learning uses rewards and penalties.
C. Supervised learning uses labeled data, unsupervised learning uses unlabeled data, and reinforcement learning uses rewards and penalties.
D. Supervised learning uses rewards and penalties, unsupervised learning uses labeled data, and reinforcement learning uses unlabeled data.
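The key distinction is the shape of the training signal. A minimal data-shape sketch (all values below are hypothetical, purely for illustration):

```python
# Supervised: each feature vector is paired with a ground-truth label.
supervised_data = [([5.1, 3.5], "setosa"), ([6.2, 2.9], "versicolor")]

# Unsupervised: features only; any structure (clusters, components)
# must be discovered without labels.
unsupervised_data = [[5.1, 3.5], [6.2, 2.9], [5.9, 3.0]]

# Reinforcement: no fixed dataset at all; an agent observes states,
# takes actions, and learns from rewards and penalties over time.
rl_transition = {"state": "s0", "action": "right",
                 "reward": 1.0, "next_state": "s1"}
```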
Mid-level · Multiple choice
What is overfitting and how can it be prevented?
A. Overfitting occurs when a model is too simple; it can be prevented by adding more features.
B. Overfitting is when a model performs poorly on training data; it can be prevented by increasing the dataset size.
C. Overfitting occurs when a model learns irrelevant details from training data; it can be prevented using cross-validation and regularization.
D. Overfitting is when a model performs well on new data; it can be prevented by using more complex models.
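Cross-validation detects overfitting by holding out part of the data for evaluation. A minimal pure-Python sketch of the k-fold index split (the helper name is ours; libraries such as scikit-learn provide equivalent utilities):

```python
def k_fold_indices(n: int, k: int):
    """Yield (train_idx, val_idx) index splits for k-fold cross-validation.

    Each of the k folds serves exactly once as the validation set; a model
    that has merely memorized its training data scores poorly there."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

splits = list(k_fold_indices(10, 5))
```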