Neural Network Architecture
Neural networks are computing systems inspired by biological neural networks. They consist of interconnected nodes (neurons) organized in layers that transform input data through weighted connections and activation functions.
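The core computation can be sketched for a single neuron: multiply each input by its weight, sum, add a bias, and pass the result through an activation function. The weights and bias below are illustrative values, not the demo's actual ones:

```python
def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, then a ReLU activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, z)  # ReLU: pass positive values, clamp negatives to zero

# Three inputs, matching the demo's input layer.
out = neuron([1.0, 0.5, -1.0], [0.5, 0.4, 0.1], 0.0)
print(out)
```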

This demo lets you explore how different architectures and activation functions affect information flow through the network.

How to Use This Demo:
• Adjust the input values using the sliders or enter custom values
• Click on any node to view and modify its incoming weights
• Use Add Layer and Remove Layer buttons to change network architecture
• Change the number of units in each hidden layer using the controls
• Select different activation functions for each layer to see their effects
• Watch how values propagate through the network in real-time
• Use "Generate Random Inputs" to test with different input combinations

Interactive Features:
• Weight editing: Click any neuron to modify its incoming weights
• Architecture modification: Add/remove layers dynamically
• Layer customization: Change neuron counts and activation functions

Layer Types:
• Input Layer: 3 input values that you control
• Hidden Layers: Transform inputs using weights and activation functions
• Output Layer: A single output value representing the network's prediction
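The layer types above can be sketched as a plain-Python forward pass. The layer sizes, weights, and biases here are illustrative, not the demo's actual values:

```python
def relu(z):
    return max(0.0, z)

def identity(z):
    return z

def layer(inputs, weights, biases, act):
    """Compute one layer; weights[j] holds the incoming weights of neuron j."""
    return [act(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, layers):
    """Propagate values layer by layer, as the demo visualizes."""
    values = inputs
    for weights, biases, act in layers:
        values = layer(values, weights, biases, act)
    return values

# 3 inputs -> 2 hidden neurons (ReLU) -> 1 output (linear)
net = [
    ([[0.5, 0.4, 0.1], [-0.3, 0.2, 0.6]], [0.0, 0.0], relu),
    ([[1.0, 0.5]], [0.1], identity),
]
print(forward([1.0, 0.5, -1.0], net))
```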

Visual Elements:
• Node borders are green for positive values and red for negative values
• Connection colors reflect weight values, from negative to positive

Architecture Guidelines:
• Start with 1-2 hidden layers for most problems
• Use 10-100 neurons per layer as a starting point
• Deeper networks can model more complex relationships but may be harder to train
• Consider width vs. depth trade-offs: a few wide layers or many narrower ones
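One concrete way to weigh width against depth is to count trainable parameters. A minimal sketch for fully connected layers (the example layer sizes are made up for illustration):

```python
def param_count(layer_sizes):
    """Weights plus biases for a fully connected network.

    Each consecutive pair of layers contributes n_in * n_out weights
    and n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

wide = param_count([3, 64, 1])       # one wide hidden layer
deep = param_count([3, 12, 12, 1])   # two narrower hidden layers
print(wide, deep)
```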

Activation Function Selection:
• ReLU is the most common choice for hidden layers (cheap to compute and mitigates vanishing gradients)
• Sigmoid for binary classification output layers
• Linear for regression output layers
• Tanh is sometimes a better hidden-layer choice than sigmoid because its output is zero-centered
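The four activation functions listed above are simple to define and compare directly:

```python
import math

def relu(z):
    return max(0.0, z)          # clamps negatives to 0; range [0, inf)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # squashes to (0, 1); good for binary outputs

def tanh(z):
    return math.tanh(z)         # squashes to (-1, 1); zero-centered

def linear(z):
    return z                    # identity; unbounded, used for regression outputs

for z in (-2.0, 0.0, 2.0):
    print(f"z={z:+.1f}  relu={relu(z):.3f}  "
          f"sigmoid={sigmoid(z):.3f}  tanh={tanh(z):.3f}")
```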

Practical Considerations:
• Observe how different architectures affect the output for the same inputs
• Notice how activation functions shape the transformation at each layer
• Experiment with weight values to understand their impact
• Real networks require training data and optimization algorithms
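The last point can be illustrated with a toy sketch: a single linear neuron fit to one training sample by gradient descent on squared error. The learning rate and data are made up for illustration; real training uses many samples and dedicated optimizers:

```python
def sgd_step(w, b, x, y_true, lr=0.1):
    """One gradient-descent update for a linear neuron with squared-error loss."""
    y_pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = y_pred - y_true
    # Gradients of err**2: d/dw_i = 2*err*x_i, d/db = 2*err
    w = [wi - lr * 2 * err * xi for wi, xi in zip(w, x)]
    b = b - lr * 2 * err
    return w, b

# Fit the demo's example inputs to a target output of 2.0.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(50):
    w, b = sgd_step(w, b, [1.0, 0.5, -1.0], 2.0)

prediction = sum(wi * xi for wi, xi in zip(w, [1.0, 0.5, -1.0])) + b
print(prediction)
```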
[Interactive network visualization: input values (1.0, 0.5, -1.0) propagate through the Input, Hidden 1, and Hidden 2 layers to the Output layer, with a Weights & Biases legend running from negative to positive.]