Feedback for Group Assignment 2.6

CEGM1000 MUDE: Week 2.6, Friday, Dec 6th, 2024.

General feedback

Overall, questions were answered very well!

Question 1:

1.1: Correct that validation tunes hyperparameters, but explanations could be more detailed. Clarify the distinct roles of the training, validation, and test sets: parameters are fitted on the training set, hyperparameters are selected on the validation set, and the test set is held out for a single, unbiased estimate of generalization. Avoid misleading language about "tuning" the model on the test set; the three sets must remain independent.
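The three-way split described above can be sketched as follows (the array shapes and the 60/20/20 ratio are illustrative, not taken from the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # toy feature matrix (hypothetical data)
y = rng.normal(size=100)        # toy targets

# Shuffle once, then split 60/20/20 into train / validation / test.
idx = rng.permutation(len(X))
train_idx, val_idx, test_idx = idx[:60], idx[60:80], idx[80:]

X_train, y_train = X[train_idx], y[train_idx]   # fit model parameters here
X_val, y_val = X[val_idx], y[val_idx]           # tune hyperparameters here
X_test, y_test = X[test_idx], y[test_idx]       # touch once, for the final unbiased evaluation

# The three index sets are disjoint, so no sample plays two roles.
assert set(train_idx) & set(val_idx) == set()
assert set(val_idx) & set(test_idx) == set()
```

Because the split is disjoint, the test score is an honest estimate of performance on unseen data.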

1.2: Explanations were often vague; mention data leakage or the need to avoid bias when choosing the split. Give a clearer statement of what a plateauing validation loss implies: further training no longer improves performance on unseen data, which is the usual signal to stop.
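One common response to a plateauing validation loss is early stopping. A minimal sketch (the patience value and the loss history below are illustrative, not from the assignment):

```python
# Stop once the validation loss has not improved for `patience` consecutive epochs.
def early_stop_epoch(val_losses, patience=3, min_delta=0.0):
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, best_epoch = loss, epoch      # new best: reset the patience counter
        elif epoch - best_epoch >= patience:
            return epoch                        # plateau detected: stop training here
    return len(val_losses) - 1                  # never plateaued within the history

# This toy history plateaus after epoch 4, so training stops `patience` epochs later.
history = [1.0, 0.6, 0.4, 0.35, 0.34, 0.34, 0.34, 0.34, 0.34]
print(early_stop_epoch(history))  # -> 7
```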

1.3: Hyperparameter tuning is not directly tied to data scaling; keep the two concepts separate. The statement that unseen data causes different scaling is vague: be precise that scaling parameters (e.g., mean and standard deviation) are computed on the training set only and then applied unchanged to validation and test data.
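A minimal sketch of correct scaling, assuming simple standardisation (toy data, not from the assignment):

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(loc=5.0, scale=2.0, size=(80, 2))
X_test = rng.normal(loc=5.0, scale=2.0, size=(20, 2))

# Compute the scaling parameters on the TRAINING data only...
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# ...and apply the SAME parameters to the test data. Re-fitting the scaler
# on the test data would leak information about the test distribution.
X_train_scaled = (X_train - mu) / sigma
X_test_scaled = (X_test - mu) / sigma

# The training data is exactly standardised; the test data only approximately.
assert np.allclose(X_train_scaled.mean(axis=0), 0.0)
```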

Question 2:

2.1: The validation loss does not go back up, so there is no evidence of overfitting, nor of underfitting. Expand the discussion of the trends in the loss curves. Avoid contradictory explanations (e.g., claiming that the loss curves keep decreasing and, at the same time, that the model underfits).

2.2: Many answers lacked clarity on why the symmetry of the beam creates ambiguity: symmetric configurations can produce identical readings at a single sensor, so the measurement does not identify the state uniquely. The explanation of what goes wrong when only one sensor is used was missing or vague.

2.3: The assumption of Gaussian noise was missing; answers lacked important detail and depth.
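The Gaussian-noise assumption matters because minimising the mean-squared error is equivalent to maximising the likelihood under i.i.d. Gaussian noise. A small numerical check of that equivalence (the constant-plus-noise model and all values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs = 3.0 + rng.normal(scale=0.5, size=200)  # observations = constant + Gaussian noise

# Candidate estimates of the constant: compare MSE with the Gaussian
# negative log-likelihood (written up to additive/multiplicative constants).
candidates = np.linspace(2.0, 4.0, 201)
mse = [np.mean((y_obs - c) ** 2) for c in candidates]
nll = [0.5 * np.sum((y_obs - c) ** 2) / 0.5 ** 2 for c in candidates]

# Both criteria are minimised by the same estimate (the sample mean).
best_mse = candidates[np.argmin(mse)]
best_nll = candidates[np.argmin(nll)]
assert best_mse == best_nll
```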

Question 3:

3.1: Validation loss trends need to be explained more thoroughly. Avoid vague phrasing like "could be overfitting or neither"; point to the specific trend in the curves that supports your conclusion.

3.2: Many answers missed the importance of monitoring the validation loss for trends (e.g., when it stops decreasing or starts to rise). More specific detail and analysis were needed.

3.3 & 3.4: Missing mention of the irreducible error, the noise-driven part of the error that no model can remove. Explain the relationship between model complexity and performance (the bias-variance trade-off).
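A toy illustration of both points, assuming a hypothetical sine-plus-noise regression problem (none of the numbers come from the assignment): the test error falls as complexity grows, but never below the noise variance.

```python
import numpy as np

rng = np.random.default_rng(3)
noise_std = 0.3  # assumed irreducible noise level for this toy example

x_train = rng.uniform(-1, 1, 200)
y_train = np.sin(np.pi * x_train) + rng.normal(scale=noise_std, size=200)
x_test = rng.uniform(-1, 1, 1000)
y_test = np.sin(np.pi * x_test) + rng.normal(scale=noise_std, size=1000)

# Test MSE for increasing polynomial degree (a proxy for model complexity).
test_mse = {}
for degree in [1, 3, 5, 9]:
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse[degree] = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# Degree 1 underfits badly; higher degrees approach, but never beat,
# the noise variance of ~0.09 -- that part of the error is irreducible.
assert test_mse[1] > test_mse[9]
assert all(mse > 0.07 for mse in test_mse.values())
```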

Question 4:

4.1: Provide details about the number of epochs and the learning rate, and their impact on performance: too large a learning rate makes training unstable, while too small a rate or too few epochs leaves the model under-trained.
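These effects can be seen on the simplest possible loss surface; a sketch using gradient descent on f(w) = w², with illustrative learning rates and epoch counts (not values from the assignment):

```python
# Gradient descent on f(w) = w**2; the gradient is 2*w.
def run_gd(lr, epochs, w0=1.0):
    w = w0
    for _ in range(epochs):
        w -= lr * 2 * w
    return w

small = run_gd(lr=0.01, epochs=10)   # small rate, few epochs: barely moves from w0
good = run_gd(lr=0.1, epochs=100)    # converges towards the minimum at w = 0
large = run_gd(lr=1.1, epochs=100)   # rate too high: the iterates diverge

print(abs(small), abs(good), abs(large))
```

The same qualitative behaviour (slow progress, convergence, divergence) appears in neural-network training, just on a much less tidy loss surface.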

4.2: Avoid vague claims about model complexity. Clarify why smaller or simpler models might underfit: they lack the capacity to represent the underlying relationship in the data.

4.3: Discuss complexity and the number of layers more thoroughly, and correlate the architecture choices with the observed validation error trends.

End of file.

© Copyright 2024 MUDE, TU Delft. This work is licensed under a CC BY 4.0 License.