Bias is error from a model that is too simple to capture the true pattern (underfitting). Variance is error from a model that is too sensitive to the particular training set it saw (overfitting). The goal is to find the sweet spot between the two, e.g. via cross-validation, regularization, or tuning model complexity.
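The tradeoff can be seen directly by varying model complexity and comparing train vs. validation error. A minimal sketch with NumPy polynomial fits on synthetic data (the target function, noise level, and degrees are illustrative choices, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: true function is sin(x), observed with noise.
x = np.linspace(0, 3, 30)
y = np.sin(x) + rng.normal(0, 0.2, size=x.shape)

# Simple holdout split: every third point goes to validation.
val = np.arange(len(x)) % 3 == 0
x_tr, y_tr = x[~val], y[~val]
x_va, y_va = x[val], y[val]

def mse(degree):
    """Train/validation MSE for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    err_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    return err_tr, err_va

for degree in (1, 3, 10):
    tr, va = mse(degree)
    print(f"degree {degree:2d}: train MSE {tr:.3f}, val MSE {va:.3f}")
```

Degree 1 is high-bias (both errors high), degree 10 is high-variance (train error keeps dropping while validation error stops improving or worsens), and a middle degree sits near the sweet spot; cross-validation automates exactly this comparison.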
Precision: Of all predicted positives, how many are correct? (Minimize false positives.) Recall: Of all actual positives, how many did we find? (Minimize false negatives.) Use precision when false positives are costly (e.g. spam marking good email). Use recall when missing a positive is costly (e.g. disease screening). F1 balances both.
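These definitions are short enough to compute by hand. A minimal sketch (the helper name and the example labels are made up for illustration; in practice you would use a library such as scikit-learn):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from parallel lists of 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, fraction correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: 4 actual positives; the model predicts 3 positives, 2 of them correct.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # precision=0.67 recall=0.50 f1=0.57
```

Note the asymmetry in the example: precision (2/3) and recall (1/2) differ because false positives and false negatives are counted separately, which is exactly why the right metric depends on which error is costlier.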