Model Evaluation and Validation
Assess your grasp of metrics, cross-validation, and hyperparameter tuning.
1. Which metric is most appropriate for evaluating classification models on imbalanced datasets?
Accuracy
F1-Score
Precision
Specificity
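
For reference, a minimal scikit-learn sketch showing how accuracy and F1 behave on an imbalanced toy label set (the label vectors are illustrative):

    from sklearn.metrics import accuracy_score, f1_score

    # Illustrative imbalanced labels: nine negatives, one positive
    y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
    y_pred = [0] * 10  # a model that always predicts the majority class

    print(accuracy_score(y_true, y_pred))             # 0.9 -- deceptively high
    print(f1_score(y_true, y_pred, zero_division=0))  # 0.0 -- exposes the failure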
2. Which of the following are cross-validation techniques?
k-fold cross-validation
Train-test split
Stratified k-fold cross-validation
Leave-one-out cross-validation
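
All three cross-validation splitters named above exist in scikit-learn's model_selection module; a minimal sketch with toy data:

    from sklearn.model_selection import KFold, StratifiedKFold, LeaveOneOut

    X = [[0], [1], [2], [3], [4], [5]]  # illustrative toy features
    y = [0, 0, 0, 1, 1, 1]

    for cv in (KFold(n_splits=3), StratifiedKFold(n_splits=3), LeaveOneOut()):
        print(type(cv).__name__, "->", cv.get_n_splits(X, y), "splits")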
3. Overfitting occurs when a model performs well on training data but poorly on unseen test data.
True
False
4. What does AUC stand for in the context of ROC-AUC?
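
For context, scikit-learn computes this metric directly from true labels and predicted scores (the values below are illustrative):

    from sklearn.metrics import roc_auc_score

    y_true  = [0, 0, 1, 1]
    y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for the positive class

    print(roc_auc_score(y_true, y_score))  # 0.75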
5. What is the primary purpose of a validation set?
Train the model
Tune hyperparameters
Evaluate final model performance
Detect overfitting
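
One common way to carve out a validation set is two successive train_test_split calls; a minimal sketch (the array shapes are illustrative):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.arange(100).reshape(50, 2)  # illustrative features
    y = np.arange(50) % 2              # illustrative labels

    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
    print(len(X_train), len(X_val), len(X_test))  # 30 10 10 -> a 60/20/20 split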
6. Which metrics are suitable for evaluating regression models?
Mean Squared Error (MSE)
F1-Score
R-squared
Precision
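
Both regression metrics are one call each in scikit-learn; a minimal sketch with illustrative targets:

    from sklearn.metrics import mean_squared_error, r2_score

    y_true = [3.0, -0.5, 2.0, 7.0]  # illustrative regression targets
    y_pred = [2.5,  0.0, 2.0, 8.0]

    print(mean_squared_error(y_true, y_pred))  # 0.375
    print(r2_score(y_true, y_pred))            # ~0.949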
7. Stratified k-fold cross-validation ensures each fold has a similar class distribution to the original dataset.
True
False
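
This property can be checked directly; a minimal sketch with an illustrative 2:1 class imbalance:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    y = np.array([0] * 8 + [1] * 4)  # illustrative 2:1 imbalance
    X = np.zeros((12, 1))            # features do not affect the split

    for train_idx, test_idx in StratifiedKFold(n_splits=4).split(X, y):
        print(np.bincount(y[test_idx]))  # every fold holds [2 1] -- the same 2:1 ratio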
8. Name the cross-validation technique in which the model is trained on all data except one sample and tested on that single sample, with the process repeated for every sample.
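
A minimal sketch of this scheme as exposed by scikit-learn (the toy X is illustrative):

    from sklearn.model_selection import LeaveOneOut

    X = [[1], [2], [3], [4]]  # illustrative toy data

    for train_idx, test_idx in LeaveOneOut().split(X):
        print("train:", train_idx, "test:", test_idx)  # one held-out sample per round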
9. Which metric is defined as the ratio of true positives to the sum of true positives and false positives?
Recall
Accuracy
Precision
Specificity
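
A minimal worked example, with TP and FP counted by hand from illustrative labels:

    from sklearn.metrics import precision_score

    y_true = [1, 1, 0, 0, 1]  # illustrative labels
    y_pred = [1, 0, 1, 0, 1]  # TP = 2, FP = 1

    print(precision_score(y_true, y_pred))  # 2 / (2 + 1) = 0.666...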
10. Which of the following indicate that a model is overfitting?
High training accuracy, low test accuracy
Low training accuracy, low test accuracy
High variance
High bias
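
A minimal sketch of how this symptom shows up in practice, using an unconstrained decision tree on illustrative synthetic data:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
    print("train:", model.score(X_tr, y_tr))  # typically 1.0
    print("test: ", model.score(X_te, y_te))  # noticeably lower -- an overfitting signal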
11. Bias refers to the error introduced by the model's simplifying assumptions about the data.
True
False
12. What term describes the process of adjusting hyperparameters to optimize model performance on the validation set?
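
One standard tool for this process is scikit-learn's GridSearchCV; a minimal sketch (the estimator and parameter grid are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
    grid.fit(X, y)  # fits one model per (parameter value, fold) pair
    print(grid.best_params_, grid.best_score_)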
13. Which cross-validation technique is most computationally expensive for large datasets?
5-fold cross-validation
Stratified k-fold cross-validation
Leave-one-out cross-validation
Train-test split
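
The cost difference is simply the number of model fits each scheme requires; a minimal sketch with an illustrative sample count:

    from sklearn.model_selection import KFold, LeaveOneOut

    X = [[0]] * 10_000  # illustrative dataset size

    print(KFold(n_splits=5).get_n_splits(X))  # 5 model fits
    print(LeaveOneOut().get_n_splits(X))      # 10000 model fits -- one per sample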
14. Which of the following are classification metrics?
ROC-AUC
Mean Absolute Error (MAE)
F1-Score
R-squared
15. A model with high variance is likely to underfit the training data.
True
False
16. In k-fold cross-validation, what does the 'k' represent?
17. Which metric measures the proportion of actual positive cases correctly identified by the model?
Precision
Recall
Accuracy
Specificity
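
A minimal worked example, reusing the illustrative labels from question 9 so precision and recall can be compared on the same predictions:

    from sklearn.metrics import recall_score

    y_true = [1, 1, 0, 0, 1]  # illustrative labels
    y_pred = [1, 0, 1, 0, 1]  # TP = 2, FN = 1

    print(recall_score(y_true, y_pred))  # 2 / (2 + 1) = 0.666...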
18. Which techniques help reduce overfitting?
Adding regularization (e.g., L1/L2)
Increasing model complexity
Using more training data
Decreasing the number of features
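
As a sketch of the regularization option, compare plain least squares with Ridge (an L2 penalty) on illustrative data with more features than samples; the regularized model usually cross-validates better in this setting:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.model_selection import cross_val_score

    # Illustrative: few noisy samples, many features -- easy to overfit
    X, y = make_regression(n_samples=40, n_features=60, noise=10.0, random_state=0)

    for model in (LinearRegression(), Ridge(alpha=1.0)):  # Ridge adds an L2 penalty
        print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())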
19. The test set should be used to tune hyperparameters during model development.
True
False
20. Name the matrix that summarizes true positives, false positives, true negatives, and false negatives for a classification model.
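
A minimal sketch of this summary in scikit-learn, using illustrative labels:

    from sklearn.metrics import confusion_matrix

    y_true = [1, 1, 0, 0, 1]  # illustrative labels
    y_pred = [1, 0, 1, 0, 1]

    # Rows are actual classes, columns are predictions: [[TN, FP], [FN, TP]]
    print(confusion_matrix(y_true, y_pred))  # [[1 1]
                                             #  [1 2]]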