Loss Functions & Optimization

Loss Functions (Cost Functions)

A loss function measures how wrong the model is. Different tasks need different losses.

Binary Classification — Binary Cross-Entropy

Loss = -[y × log(ŷ) + (1-y) × log(1-ŷ)]    (log = natural logarithm)

Example:

  • True label: y = 1 (positive class)

  • Prediction: ŷ = 0.9 (high confidence correct)

  • Loss = -[1 × log(0.9) + 0 × log(0.1)] = -log(0.9) ≈ 0.105 ✓ (low)

  • True label: y = 1 (positive class)

  • Prediction: ŷ = 0.1 (low confidence, wrong)

  • Loss = -[1 × log(0.1) + 0 × log(0.9)] = -log(0.1) ≈ 2.303 ✗ (high)

Interpretation: the loss penalizes confident wrong predictions most heavily.
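
To make the numbers concrete, here is a minimal sketch in plain Python (no ML framework assumed) that reproduces both cases above:

import math

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # Clip the prediction away from 0 and 1 so log() never sees an invalid input.
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(binary_cross_entropy(1, 0.9))  # ~0.105  (confident and correct: low loss)
print(binary_cross_entropy(1, 0.1))  # ~2.303  (confident and wrong: high loss)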

Multi-class Classification — Categorical Cross-Entropy

Loss = -Σᵢ yᵢ × log(ŷᵢ)

Used after a softmax output for 3 or more classes; since yᵢ is one-hot, only the true-class term contributes to the sum.
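
A sketch of softmax followed by categorical cross-entropy; the 3-class logits below are made up for illustration:

import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def categorical_cross_entropy(y, y_hat, eps=1e-12):
    # y is a one-hot vector, so only the true class contributes to the sum.
    return -sum(yi * math.log(max(p, eps)) for yi, p in zip(y, y_hat))

probs = softmax([2.0, 1.0, 0.1])              # hypothetical logits for 3 classes
print(probs)                                  # ~[0.659, 0.242, 0.099]
print(categorical_cross_entropy([1, 0, 0], probs))  # ~0.417 = -log(0.659)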

Regression — Mean Squared Error (MSE)

Loss = (1/n) Σᵢ (yᵢ - ŷᵢ)²

Example: Predicting house prices

  • True: $500k, Predicted: $510k, Error: (500-510)² = 100 (in $k²)
  • True: $500k, Predicted: $400k, Error: (500-400)² = 10,000 ← 100× worse for a 10× larger miss!

MSE penalizes large errors disproportionately: doubling the error quadruples the loss (quadratic growth).
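
The same house-price example in code (values in $k, as above):

def mse(y_true, y_pred):
    # Mean of squared errors; squaring makes large misses dominate the average.
    return sum((y - p) ** 2 for y, p in zip(y_true, y_pred)) / len(y_true)

print(mse([500], [510]))  # 100     ($k², a 10k miss)
print(mse([500], [400]))  # 10000   (a 10x larger miss costs 100x more)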

Why These Losses?

  • Cross-entropy: Designed for classification (probabilistic interpretation)
  • MSE: Designed for regression (continuous values)
  • Custom losses: Can be designed for specific tasks (see the sketch after this list)
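
As one illustration of a custom loss, here is a hypothetical asymmetric MSE; the name asymmetric_mse and the 3× under_weight are invented for this sketch, useful when under-predicting is costlier than over-predicting:

def asymmetric_mse(y_true, y_pred, under_weight=3.0):
    # Weight squared errors more heavily when the model under-predicts.
    total = 0.0
    for y, p in zip(y_true, y_pred):
        w = under_weight if p < y else 1.0
        total += w * (y - p) ** 2
    return total / len(y_true)

print(asymmetric_mse([500], [490]))  # 300.0  (under-prediction, 3x penalty)
print(asymmetric_mse([500], [510]))  # 100.0  (over-prediction, normal penalty)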

Key insight: Choose a loss that matches your task!
