Neural Network Architecture
Neural networks are computing systems inspired by biological neural networks. They consist of interconnected nodes (neurons) organized in layers that transform input data through weighted connections and activation functions.

In civil engineering, neural networks can model complex relationships for:
• Structural response: Predicting building deflection from load, material properties, and geometry
• Material behavior: Estimating concrete strength from mix proportions, curing time, and test conditions
• Traffic patterns: Forecasting congestion from weather, time, and historical flow data
• Environmental monitoring: Analyzing air quality from multiple sensor readings

This demo lets you explore how different architectures and activation functions affect information flow through the network.

How to Use This Demo:
• Adjust the input values using the sliders or enter custom values
• Click on any node to view and modify its incoming weights
• Use Add Layer and Remove Layer buttons to change network architecture
• Change the number of units in each hidden layer using the controls
• Select different activation functions for each layer to see their effects
• Watch how values propagate through the network in real time
• Use "Generate Random Inputs" to test with different input combinations

Interactive Features:
• Weight editing: Click any neuron to modify its incoming weights
• Architecture modification: Add/remove layers dynamically
• Layer customization: Change neuron counts and activation functions

Layer Types:
Input Layer (Blue): 3 input values that you control
Hidden Layers (Purple): Transform inputs using weights and activation functions
Output Layer (Green): Single output value representing the network's prediction

Visual Elements:
Node background intensity represents value magnitudes (brighter = larger values)
Node colors distinguish positive (cooler hues) vs negative (warmer hues) values
Node borders are green for positive, red for negative values
Text color automatically adjusts for readability (white on dark backgrounds)
Pre-activation labels are green for positive, red for negative values
Connection thickness represents weight magnitudes
The value display shows each neuron's pre-activation → post-activation pair

Mathematical Foundation:
Each neuron computes a weighted sum of its inputs, then applies an activation function:

z_i = Σ_j w_ij x_j + b_i
a_i = f(z_i)

Where z_i is the pre-activation, a_i is the post-activation output, x_j are the inputs, w_ij are the weights, b_i is the bias, and f(·) is the activation function.
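As a concrete illustration, the sketch below implements this computation in Python with NumPy. The function name neuron_forward and the example numbers are ours, purely for illustration; they are not taken from the demo.

```python
import numpy as np

def neuron_forward(x, w, b, f):
    """One neuron: weighted sum of inputs plus bias, then activation."""
    z = np.dot(w, x) + b   # pre-activation z_i
    return z, f(z)         # (z_i, a_i)

# Example: a neuron with 3 inputs and a ReLU activation
x = np.array([1.0, 0.5, -1.0])   # inputs x_j
w = np.array([0.8, -0.4, 0.3])   # incoming weights w_ij
b = 0.1                          # bias b_i
z, a = neuron_forward(x, w, b, lambda z: np.maximum(0.0, z))
print(f"pre-activation z = {z:.2f}, post-activation a = {a:.2f}")
```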

Activation Functions:
Linear: f(x) = x (preserves the input)
ReLU: f(x) = max(0, x) (introduces non-linearity while remaining simple)
Sigmoid: f(x) = 1 / (1 + e^(-x)) (squashes to the (0, 1) range)
Tanh: f(x) = tanh(x) (squashes to the (-1, 1) range, zero-centered)

Non-linear activations enable the network to learn complex patterns.
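For reference, here is one straightforward NumPy implementation of these four activations (a minimal sketch; the demo's own implementation may differ):

```python
import numpy as np

def linear(x):
    return x                          # identity: passes values through unchanged

def relu(x):
    return np.maximum(0.0, x)         # zeroes out negatives, keeps positives

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes any real number into (0, 1)

def tanh(x):
    return np.tanh(x)                 # squashes into (-1, 1), zero-centered

z = np.array([-2.0, 0.0, 2.0])
for f in (linear, relu, sigmoid, tanh):
    print(f.__name__, f(z))
```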
Architecture Guidelines:
• Start with 1-2 hidden layers for most problems
• Use 10-100 neurons per layer as a starting point
• Deeper networks can model more complex relationships but may be harder to train
• Weigh width against depth: adding neurons per layer (wider) and adding layers (deeper) are different ways to increase capacity, as the sketch below lets you compare
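To make these choices concrete, here is a hedged sketch of a forward pass through a configurable stack of dense layers. The weights are randomly initialized and the helper forward is our own construction, not the demo's code:

```python
import numpy as np

def forward(x, layer_sizes, activation, rng):
    """Propagate x through randomly initialized dense layers of the given sizes."""
    a = x
    for n_out in layer_sizes:
        w = rng.normal(0.0, 1.0, size=(n_out, a.size))  # incoming weight matrix
        b = np.zeros(n_out)                             # biases
        z = w @ a + b                                   # pre-activations
        a = activation(z)                               # post-activations
    return a

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5, -1.0])

# Two ReLU hidden layers (4 and 3 neurons) feeding a single linear output,
# matching the architecture shown in the demo's diagram
hidden = forward(x, [4, 3], lambda z: np.maximum(0.0, z), rng)
output = forward(hidden, [1], lambda z: z, rng)
print(output)
```

Changing layer_sizes (e.g., [16] vs. [8, 8]) is a direct way to experiment with the width-versus-depth trade-off.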

Activation Function Selection:
• ReLU is most common for hidden layers (fast computation, avoids vanishing gradients)
• Sigmoid for binary classification output layers
• Linear for regression output layers
• Tanh is sometimes a better choice than sigmoid for hidden layers because its output is zero-centered
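To illustrate the output-layer choices above, the sketch below feeds the same pre-activation into two different heads; the variable names and numbers are ours, for illustration only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = np.array([0.27, 0.0, 1.15])   # post-activations from the last hidden layer
w = np.array([0.5, -0.8, 0.2])         # output neuron's incoming weights
b = 0.1                                # output neuron's bias
z = w @ hidden + b                     # output pre-activation

regression_output = z                  # linear head: any real value
binary_probability = sigmoid(z)        # sigmoid head: probability in (0, 1)
print(regression_output, binary_probability)
```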

Practical Considerations:
• Observe how different architectures affect the output for the same inputs
• Notice how activation functions shape the transformation at each layer
• Experiment with weight values to understand their impact
• Real networks require training data and optimization algorithms
[Interactive network diagram: 3 input nodes → hidden layer (4 neurons) → hidden layer (3 neurons) → 1 output node, each labeled with its current value]