Model Evaluation and Validation

Assess your grasp of metrics, cross-validation, and hyperparameter tuning. For hands-on reference, two short code sketches follow the questions.

1. Which metric is most appropriate for evaluating classification models on imbalanced datasets?
2. Which of the following are cross-validation techniques?
3. Overfitting occurs when a model performs well on training data but poorly on unseen test data.
4. What does AUC stand for in the context of ROC-AUC?
5. What is the primary purpose of a validation set?
6. Which metrics are suitable for evaluating regression models?
7. Stratified k-fold cross-validation ensures each fold has a similar class distribution to the original dataset.
8. Name the cross-validation technique where the model is trained on all data except one sample and tested on that single sample, repeated for all samples.
9. Which metric is defined as the ratio of true positives to the sum of true positives and false positives?
10. Which of the following indicate that a model is overfitting?
11. Bias refers to the error introduced by the model's simplifying assumptions about the data.
12. What term describes the process of adjusting hyperparameters to optimize model performance on the validation set?
13. Which cross-validation technique is most computationally expensive for large datasets?
14. Which of the following are classification metrics?
15. A model with high variance is likely to underfit the training data.
16. In k-fold cross-validation, what does the 'k' represent?
17. Which metric measures the proportion of actual positive cases correctly identified by the model?
18. Which techniques help reduce overfitting?
19. The test set should be used to tune hyperparameters during model development.
20. Name the matrix that summarizes true positives, false positives, true negatives, and false negatives for a classification model.
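
For hands-on checking, here is a minimal sketch of the classification metrics the questions touch on (confusion matrix, precision, recall, ROC-AUC). It assumes scikit-learn is installed; the imbalanced toy dataset and the logistic-regression model are illustrative choices, not part of the quiz.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, roc_auc_score)

# Imbalanced toy dataset: roughly 10% positive cases (illustrative only).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]  # scores for the positive class

# Confusion matrix: rows are actual classes, columns are predicted classes,
# summarizing true/false positives and true/false negatives.
print(confusion_matrix(y_test, y_pred))
# Precision = TP / (TP + FP); recall = TP / (TP + FN).
print(precision_score(y_test, y_pred), recall_score(y_test, y_pred))
# AUC (area under the ROC curve) is computed from scores, not hard labels.
print(roc_auc_score(y_test, y_score))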
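
Similarly, a short sketch of the cross-validation techniques named above (k-fold, stratified k-fold, leave-one-out), again assuming scikit-learn; the dataset and splitter parameters are illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, StratifiedKFold, LeaveOneOut,
                                     cross_val_score)

X, y = make_classification(n_samples=200, weights=[0.8, 0.2], random_state=0)
model = LogisticRegression(max_iter=1000)

# k-fold: 'k' (n_splits) is the number of folds the data is divided into.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=kf).mean())

# Stratified k-fold: each fold keeps roughly the original class distribution.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(model, X, y, cv=skf).mean())

# Leave-one-out: train on all samples but one, test on that single sample,
# repeated for every sample -- n model fits, hence costly on large datasets.
loo = LeaveOneOut()
print(cross_val_score(model, X, y, cv=loo).mean())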