Instructions:
• Adjust the input values using the sliders or enter custom values
• Click on any node to view and modify its incoming weights
• Use Add Layer and Remove Layer buttons to change network architecture
• Change the number of units in each hidden layer using the controls
• Select different activation functions for each layer to see their effects
• Watch how values propagate through the network in real time
• Use "Generate Random Inputs" to test with different input combinations
• Observe both pre-activation values (small labels) and post-activation values (in nodes); one such step is sketched in code after this list
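What the last two bullets describe is one forward-propagation step: each neuron computes a pre-activation z = w·x + b, then applies its activation function to get the value shown in the node. A minimal sketch of that step, assuming made-up weights and a hypothetical helper name (the app's actual values live in the UI):

    import math

    def forward_layer(inputs, weights, biases, activation):
        """One layer: pre-activation z = w.x + b, then post-activation a = f(z)."""
        outputs = []
        for w_row, b in zip(weights, biases):
            z = sum(w * x for w, x in zip(w_row, inputs)) + b  # pre-activation (small label)
            outputs.append((z, activation(z)))                 # post-activation (node value)
        return outputs

    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

    # Hypothetical 2-unit layer fed by the three inputs shown in the demo.
    inputs = [1.0, 0.5, -1.0]
    weights = [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]]
    biases = [0.0, 0.1]
    for z, a in forward_layer(inputs, weights, biases, sigmoid):
        print(f"pre {z:+.2f} -> post {a:+.2f}")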

Layer Types:
Input Layer (Blue): 3 input values that you control
Hidden Layers (Purple): Transform inputs using weights and activation functions
Output Layer (Green): Single output value representing the network's prediction (the weight shapes implied by this stack are sketched below)
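Stacking these layers constrains the shapes: each layer's weight matrix must have one column per unit in the previous layer. A quick sanity check, with the sizes assumed from the example diagram (3 inputs, hidden layers of 4 and 3 units, 1 output):

    # Layer sizes: input -> hidden -> hidden -> output
    sizes = [3, 4, 3, 1]
    for n_in, n_out in zip(sizes, sizes[1:]):
        # Each layer owns an (n_out x n_in) weight matrix and n_out biases.
        print(f"{n_in} units -> {n_out} units: weights {n_out}x{n_in}, biases {n_out}")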

Activation Functions (sketched in code after this list):
Linear: f(x) = x (no transformation)
ReLU: f(x) = max(0, x) (clips negative values to 0)
Sigmoid: f(x) = 1/(1 + e^(-x)) (outputs between 0 and 1)
Tanh: f(x) = tanh(x) (outputs between -1 and 1)
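The same four functions as straightforward Python translations of the formulas above, evaluated at a few sample points to show their output ranges:

    import math

    activations = {
        "linear":  lambda x: x,                           # identity, unbounded
        "relu":    lambda x: max(0.0, x),                 # clips negative values to 0
        "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),  # squashes into (0, 1)
        "tanh":    math.tanh,                             # squashes into (-1, 1)
    }

    for name, f in activations.items():
        print(name, [round(f(x), 2) for x in (-2.0, 0.0, 2.0)])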

Network Visualization:
Node background intensity represents value magnitudes (brighter = larger values)
Node colors distinguish positive (cooler hues) from negative (warmer hues) values, as in the color-mapping sketch after this list
Node borders are green for positive, red for negative values
Text color automatically adjusts for readability (white on dark backgrounds)
Pre-activation labels are green for positive, red for negative values
Connection thickness represents weight magnitudes
Each neuron's label shows pre-activation → post-activation values
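One plausible way to implement the color rules above; this is an illustrative guess at the mapping, not the app's actual code. Sign picks the hue and border, magnitude scales the background intensity, and perceived luminance picks a readable text color:

    def node_style(value, max_abs=2.0):
        """Map a neuron value to (background RGB, border color, text color)."""
        intensity = min(abs(value) / max_abs, 1.0)  # brighter = larger magnitude
        if value >= 0:
            bg = (0, int(100 * intensity), int(255 * intensity))  # cooler hue for positive
            border = "green"
        else:
            bg = (int(255 * intensity), int(80 * intensity), 0)   # warmer hue for negative
            border = "red"
        # Perceived luminance decides the text color (white on dark backgrounds).
        luminance = 0.299 * bg[0] + 0.587 * bg[1] + 0.114 * bg[2]
        text = "white" if luminance < 128 else "black"
        return bg, border, text

    print(node_style(1.43))   # bright cool background, green border
    print(node_style(-0.29))  # dim warm background, red border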
[Live diagram: Input (1.00, 0.50, -1.00) → Hidden (4 units: 1.43, -1.86, -1.07, -0.29) → Hidden (3 units: -0.66, 0.36, 0.36) → Output (-0.07 → 0.48)]