Instructions:
• Use the numpad to select MNIST digits and see them processed by a real CNN
• Click "Draw Your Own" to draw custom digits for classification
• Watch feature maps update as data flows through each layer
• Start at epoch 0 (random weights) and step through training
• Click "Train Epoch" to advance one training epoch at a time
• Observe how feature maps evolve as the network learns

CNN Architecture (see the code sketch after this list):
Input Layer: 28×28 grayscale digit images
Conv Layer 1: 4 filters (3×3) → ReLU activation
Max Pool 1: 2×2 pooling → reduces spatial size
Conv Layer 2: 8 filters (3×3) → ReLU activation
Max Pool 2: 2×2 pooling → further size reduction
Flatten: Convert 2D feature maps to a 1D vector
Dense Layer: 128 neurons → ReLU activation
Output Layer: 10 neurons → Softmax (digit probabilities)
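
A minimal sketch of this stack in PyTorch. The framework, "same" padding, and the class name DigitCNN are assumptions; the demo states neither its framework nor its padding:

    import torch.nn as nn

    class DigitCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 4, kernel_size=3, padding=1),  # Conv1: 4 filters, 3x3 -> 4 x 28 x 28
                nn.ReLU(),
                nn.MaxPool2d(2),                            # Pool1: 2x2 -> 4 x 14 x 14
                nn.Conv2d(4, 8, kernel_size=3, padding=1),  # Conv2: 8 filters, 3x3 -> 8 x 14 x 14
                nn.ReLU(),
                nn.MaxPool2d(2),                            # Pool2: 2x2 -> 8 x 7 x 7
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),               # 8 * 7 * 7 = 392 values
                nn.Linear(8 * 7 * 7, 128),  # Dense layer: 128 neurons
                nn.ReLU(),
                nn.Linear(128, 10),         # Output: 10 logits; softmax turns them into probabilities
            )

        def forward(self, x):               # x: batch of 1 x 28 x 28 grayscale images
            return self.classifier(self.features(x))

With "same" padding the spatial size shrinks only at the pooling layers (28×28 → 14×14 → 7×7), which gives the 392-value flatten; with unpadded convolutions the sizes would differ slightly.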

Training Process (see the training-loop sketch after this list):
Real Model States: Each epoch loads the actual trained weights from that point in training
Loss Function: Cross-entropy loss for multi-class classification
Optimizer: Adam with learning rate 0.001
Performance: Watch accuracy improve from chance level (10%) to 95%+ over 20 epochs
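
A training-loop sketch with these settings, reusing the DigitCNN sketch above; the torchvision data pipeline and the batch size of 64 are assumptions:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    train_set = datasets.MNIST("data", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = DigitCNN()                                          # from the sketch above
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001
    loss_fn = nn.CrossEntropyLoss()  # cross-entropy over the 10 digit classes

    for epoch in range(20):          # the demo steps through 20 epochs
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)  # expects raw logits
            loss.backward()
            optimizer.step()

CrossEntropyLoss applies log-softmax internally, which is why the model's output layer emits raw logits; an explicit softmax is only needed when displaying probabilities.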

Visualization Panels:
• Input Image: the selected or drawn digit, with a live Prediction readout
• CNN Architecture: a diagram of the layer stack above
• Feature Maps at Different Layers: after Conv1 (4 filters), Pool1, Conv2 (8 filters), Pool2, and FC1 (128 units)
• Output Probabilities: one probability per digit, 0 through 9
• Training status: Epoch (0–20), Loss, and Accuracy

At epoch 0 the weights are random, so predictions are near-uniform over the ten classes: the status readout shows Loss ≈ 2.303, which equals ln 10, the cross-entropy of a uniform guess over 10 classes, and Accuracy ≈ 10.0%, i.e. chance level. The feature-map panels can be reproduced outside the demo by capturing intermediate activations, as sketched below.
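
A sketch of capturing those activations with PyTorch forward hooks, reusing DigitCNN from above; the hook helper, layer indices, and the random stand-in image are illustrative:

    import torch

    activations = {}

    def save_as(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    model = DigitCNN()  # untrained here, i.e. the epoch-0 state
    model.features[2].register_forward_hook(save_as("pool1"))  # MaxPool after Conv1
    model.features[5].register_forward_hook(save_as("pool2"))  # MaxPool after Conv2

    image = torch.rand(1, 1, 28, 28)            # stand-in for a selected MNIST digit
    probs = torch.softmax(model(image), dim=1)  # the ten values of the probability panel
    print(activations["pool1"].shape)           # torch.Size([1, 4, 14, 14])
    print(activations["pool2"].shape)           # torch.Size([1, 8, 7, 7])

Each captured tensor holds one 2D map per filter, which is what the demo renders as its feature-map grids.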