A loss function is a way to measure how well a model's prediction matches the expected output.
It answers the question:
"How far off was the model?"
## Why Does It Matter?
Training a model means minimizing error.
A loss function:
- Converts prediction accuracy into a number
- Guides how the model updates itself
- Drives the learning process
## Example
Suppose a model predicts:
- House price: $250,000
But the true price is:
- $300,000
A simple loss = |300,000 - 250,000| = 50,000
The goal is to minimize this difference.
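The example above can be computed directly in Python (the figures are the ones from the text):

```python
def absolute_loss(y_true, y_pred):
    """Absolute-error loss: how far the prediction is from the truth."""
    return abs(y_true - y_pred)

# House-price example: true price $300,000, predicted $250,000.
print(absolute_loss(300_000, 250_000))  # 50000
```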
## Common Loss Functions
### Regression Problems

- **Mean Squared Error (MSE)**
  `loss = average((y_true - y_pred)^2)`
  Punishes big mistakes more than small ones.
- **Mean Absolute Error (MAE)**
  `loss = average(|y_true - y_pred|)`
  Treats all errors equally.
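Both formulas can be sketched in plain Python; the value lists below are made-up illustrative data:

```python
def mse(y_true, y_pred):
    """Mean Squared Error: average of squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute differences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(mse(y_true, y_pred))  # (0.25 + 0 + 4) / 3 ≈ 1.417 — the big miss dominates
print(mae(y_true, y_pred))  # (0.5 + 0 + 2) / 3 ≈ 0.833
```

Note how the single large error (2.0 off) contributes far more to MSE than to MAE, which is exactly the "punishes big mistakes" behavior described above.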
### Classification Problems

- **Cross-Entropy Loss**
  Measures the gap between the predicted probability and the actual label.
  Often used for models like image or text classifiers.
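For intuition, here is a sketch of binary cross-entropy for a single prediction, assuming the label is 0 or 1 and the model outputs a probability for class 1:

```python
import math

def binary_cross_entropy(y_true, p_pred):
    """Cross-entropy for one example: low when the predicted
    probability agrees with the true label, large when it doesn't."""
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

# True label is 1 (e.g. "this image is a cat").
print(binary_cross_entropy(1, 0.9))  # confident and correct → small loss (~0.105)
print(binary_cross_entropy(1, 0.1))  # confident and wrong → large loss (~2.303)
```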
## How Is It Used?
1. The model makes a prediction.
2. The loss function calculates the error.
3. The model adjusts its parameters to reduce the loss.
4. Repeat for many examples.
This is called gradient descent, covered in the next lecture.
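As a preview, the loop above can be sketched for a one-parameter model `y = w * x` trained with MSE; the data, learning rate, and step count below are illustrative choices, not prescribed values:

```python
# Fit y = w * x with MSE and a hand-derived gradient.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # true relationship: y = 2x

w = 0.0                # initial guess for the parameter
lr = 0.05              # learning rate (assumed value)

for _ in range(100):
    # Gradient of MSE with respect to w: average of 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad     # step opposite the gradient to reduce the loss

print(round(w, 3))     # converges toward 2.0, the true slope
```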
## Summary
| Term | Meaning |
|-----------------|----------------------------------------|
| Loss Function | Measures model error |
| MSE | Penalizes large errors (squared) |
| MAE | Measures absolute differences |
| Cross-Entropy | Used in classification tasks |
| Optimization | Model tries to reduce loss over time |
## Self-Check
- What is a loss function?
- What are the differences between MSE and MAE?
- Why is loss important during training?