Demo · In development · v0.1
Bayesian optimization,
with an AI copilot.
A Gaussian process learns an unknown 1D objective from observations. Click the canvas to sample. Switch modes to change how much autonomy the agent has.
click anywhere — the agent snaps you to its UCB suggestion
exploration · β
2.0
low β → exploit (sample near the posterior mean's maximum) · high β → explore (sample where the model is uncertain)
budget · 2 / 16
each observation is “expensive” — a real simulation, a real experiment. fewer is better.
What you're looking at
The indigo line is the model's best guess at an unknown function. The wash around it is uncertainty — wider where the model hasn't seen data. The dashed vermillion line marks the next sample the agent would pick under upper-confidence-bound (UCB) acquisition: the point maximizing μ(x) + β·σ(x), the posterior mean plus β times the posterior standard deviation.
In practice, each sample is expensive — a high-fidelity FDTD simulation, or a fab run. The question is not “find the max” but “find it in as few samples as possible.” Manual mode tests your intuition. Guided mode tests whether you trust the agent's. Autonomous mode leaves the room.
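The agent's suggestion rule can be sketched in a few lines. This is a minimal, hypothetical sketch — a zero-mean GP with an RBF kernel on [0, 1] — not the demo's actual implementation; the kernel choice, length scale, and grid resolution here are assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.15):
    # Squared-exponential kernel on 1D inputs (length scale is an assumption).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-6, length_scale=0.15):
    # Standard GP regression: posterior mean and std dev at each grid point.
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_grid, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha                      # the indigo line
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v ** 2, axis=0)      # prior variance 1 minus explained part
    return mu, np.sqrt(np.maximum(var, 0.0))  # sqrt(var) is the uncertainty wash

def ucb_suggestion(x_train, y_train, beta=2.0, grid_size=200):
    # The dashed vermillion line: argmax of mu + beta * sigma over a dense grid.
    x_grid = np.linspace(0.0, 1.0, grid_size)
    mu, sigma = gp_posterior(np.asarray(x_train, float),
                             np.asarray(y_train, float), x_grid)
    return x_grid[np.argmax(mu + beta * sigma)]
```

With β = 0 the rule reduces to pure exploitation (it re-samples near the best observed region); as β grows, the σ term dominates and the suggestion moves toward unexplored territory — the tradeoff the slider above controls.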