ML Experiment

🧠 Neural Network Playground

Watch a neural network learn in real time

Explore how neural networks classify data. Select a preset, then hit Reset & Watch it Learn to see the network train from random weights using backpropagation. Click on the decision boundary to add your own data points (shift-click for class 0).

Presets: XOR Simple (2-2-1) · Circle Classifier (2-4-1) · Spiral Deep (2-6-4-1) · Moons Classifier (2-4-2-1) · Checkerboard Classifier (2-8-4-1) · Gaussian Clusters Classifier (2-6-4-1) · Donut Classifier (2-4-1)

Activation: Sigmoid · ReLU · tanh

Network Architecture

Show Gradients
Layers: Input · Hidden 1 · Output

Decision Boundary

Draw Mode · Speed: 10 · LR · Steps: 0

Accuracy on XOR: 50%
Parameters: 9
Layers: 3
Training Steps: 0
Status: Idle


How it works

A neural network is a series of layers of interconnected neurons. Each connection has a weight that determines how strongly one neuron influences another. During a forward pass, input values are multiplied by their weights, summed with a bias, and passed through an activation function. With the default sigmoid activation, each neuron's output lands between 0 and 1.
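The forward pass for a single neuron can be sketched in a few lines. This is a minimal illustration, not the playground's actual code; the function names and example weights are made up here.

```python
import math

def sigmoid(z):
    # Squash a pre-activation value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then the activation.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# A neuron with weights [1.0, -1.0] and bias 0.0 seeing input (1, 0)
# has pre-activation z = 1, so its output is sigmoid(1) ≈ 0.731.
out = forward([1.0, 0.0], [1.0, -1.0], 0.0)
```

A full layer just repeats this for every neuron, and a full network feeds each layer's outputs into the next.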

During training, the network learns by adjusting its weights through backpropagation. For each data point, the network computes its prediction (forward pass), then calculates how wrong it was using binary cross-entropy loss. The error signal propagates backward through the network, computing gradients — how much each weight contributed to the error. Weights are then nudged in the direction that reduces the loss using stochastic gradient descent.
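For a single sigmoid neuron trained with binary cross-entropy, the backward pass collapses to a famously simple error signal: the gradient at the pre-activation is just (prediction − target). The sketch below uses that fact to run stochastic gradient descent on OR, which, unlike XOR, a single neuron can solve (XOR is why the playground's smallest preset still needs a hidden layer). This is an illustrative sketch, not the playground's implementation; names and hyperparameters are assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, bias, x, y, lr=0.5):
    # Forward pass: prediction in (0, 1).
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    y_hat = sigmoid(z)
    # For sigmoid + binary cross-entropy, the error signal at the
    # pre-activation simplifies to (y_hat - y).
    delta = y_hat - y
    # Gradient descent: nudge each weight against its gradient,
    # which for weight i is delta * x_i (and delta for the bias).
    new_w = [w - lr * delta * xi for w, xi in zip(weights, x)]
    new_b = bias - lr * delta
    return new_w, new_b

# Train a single neuron on OR, one point at a time (stochastic updates).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    for x, y in data:
        w, b = sgd_step(w, b, x, y)
```

In a deeper network the same delta is propagated backward layer by layer, with each layer's gradient built from the deltas of the layer after it.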

The decision boundary visualization shows how the network partitions the 2D input space. Blue regions indicate outputs below 0.5 (class 0), while red regions indicate outputs above 0.5 (class 1). Watch how the boundary evolves from random noise into clean separation as the network trains. The learning rate controls how big each weight update is — too high and the network overshoots, too low and it learns slowly.
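Rendering a decision boundary amounts to evaluating the network on a grid of (x, y) points and coloring each point by whether the output crosses 0.5. The sketch below does this in ASCII for a hand-wired 2-2-1 network that separates an XOR-style pattern; the weights are chosen by hand for illustration, not learned, and nothing here reflects the playground's actual rendering code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, y):
    # Hand-wired 2-2-1 network (weights assumed, not trained):
    h1 = sigmoid(10 * x + 10 * y - 5)    # fires when x OR y is high
    h2 = sigmoid(10 * x + 10 * y - 15)   # fires when x AND y are high
    # Output fires for "OR but not AND", i.e. an XOR-like region.
    return sigmoid(10 * h1 - 20 * h2 - 5)

def decision_grid(n=5):
    # Sample the unit square; '#' marks class 1 (output >= 0.5),
    # '.' marks class 0 (output < 0.5).
    rows = []
    for j in range(n):
        y = j / (n - 1)
        rows.append("".join(
            "#" if predict(i / (n - 1), y) >= 0.5 else "."
            for i in range(n)))
    return rows
```

The playground does the same thing at higher resolution, mapping the continuous output to a blue-to-red color ramp instead of two characters.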

In the network diagram, green connections represent positive weights (excitatory) and red connections represent negative weights (inhibitory). Thicker lines indicate larger weight magnitudes. During training, connections glow to show the network is actively learning. Try adding your own data points by clicking on the decision boundary and watch the network adapt.


© 2026 Adam Hultman