The full camera-based Pose Playground is being refreshed.
New version coming soon — click the graphic to read a short technical blog about how the pose system works.
Under the hood: Pose Playground tech notes
The Pose Playground experiment combines three main pieces of technology:
a browser webcam stream, a pose detection model, and a small 2D physics scene.
The original prototype used tf.js with MoveNet for pose detection
and Matter.js for the on-screen physics objects.
Targeting 60 FPS, each frame begins by reading pixels from the webcam
and feeding them to the pose detector.
The model returns a set of keypoints (nose, shoulders, elbows, wrists, hips,
knees, ankles) with confidences.
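In practice, low-confidence keypoints (an occluded elbow, a wrist out of frame) are filtered out before anything downstream uses them. A minimal sketch of that filter is below; the field names mirror the `@tensorflow-models/pose-detection` keypoint shape, but the threshold value is an assumption to be tuned per model and lighting.

```javascript
// A MoveNet-style keypoint: pixel coordinates plus a confidence score.
// Field names follow the pose-detection output shape, but this is a
// standalone sketch, not the library itself.
const MIN_SCORE = 0.3; // assumed threshold; tune per model and lighting

function usableKeypoints(keypoints, minScore = MIN_SCORE) {
  // Drop low-confidence keypoints so jitter from occluded joints
  // never reaches the skeleton or the colliders.
  return keypoints.filter((kp) => kp.score >= minScore);
}

// Example frame: only the nose and left wrist are confidently detected.
const frame = [
  { name: "nose", x: 210, y: 80, score: 0.91 },
  { name: "left_wrist", x: 140, y: 260, score: 0.62 },
  { name: "right_wrist", x: 0, y: 0, score: 0.05 }, // occluded
];
const usable = usableKeypoints(frame);
// usable keeps 2 of the 3 keypoints
```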
These keypoints are then normalized into the canvas coordinate system
and mapped onto a simplified skeleton.
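The normalization step is a scale from video-pixel space into canvas space. One detail worth noting: selfie-style demos usually mirror the horizontal axis so your on-screen avatar moves like a reflection. A sketch, where the mirroring default is an assumption:

```javascript
// Map a keypoint from video-pixel space into canvas space.
// Mirroring is an assumption: most selfie-style demos flip the x-axis
// so the skeleton moves like your reflection.
function toCanvas(kp, video, canvas, mirror = true) {
  const x = kp.x * (canvas.width / video.width);
  const y = kp.y * (canvas.height / video.height);
  return { x: mirror ? canvas.width - x : x, y };
}

const video = { width: 640, height: 480 };
const canvas = { width: 320, height: 240 };

const p = toCanvas({ x: 640, y: 480 }, video, canvas, false);
// p is { x: 320, y: 240 }: bottom-right maps to bottom-right
const m = toCanvas({ x: 0, y: 0 }, video, canvas, true);
// mirrored: the video's left edge lands on the canvas's right edge
```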
For interaction, the code builds invisible colliders around specific keypoints:
for example, a small circle around the head is checked against a red balloon body,
and circles around the feet are checked against boxes and stars in the physics engine.
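Since both the keypoint colliders and the targets can be treated as circles, the overlap test is a single distance comparison. A sketch (the radii are illustrative; the real scene sizes them per body part):

```javascript
// Circle-vs-circle overlap: the cheapest collider check available.
function circlesOverlap(a, b) {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const r = a.r + b.r;
  // Compare squared distances to avoid a sqrt per check.
  return dx * dx + dy * dy <= r * r;
}

const headCollider = { x: 100, y: 50, r: 20 }; // circle around the head keypoint
const balloon = { x: 125, y: 50, r: 10 };      // red balloon body

const hit = circlesOverlap(headCollider, balloon);
// hit is true: centers are 25px apart, combined radius is 30px
```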
When a collision occurs, the physics body receives an impulse and the HUD updates
to show which part of the body triggered it.
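A natural way to compute that impulse is a push directed away from the colliding keypoint. In the real scene the resulting vector would go to the physics engine (Matter.js exposes `Matter.Body.applyForce` for this); the sketch below keeps it as plain math, with the strength constant an assumption:

```javascript
// Compute a push on a physics body, directed away from the keypoint
// that hit it. The strength value is illustrative.
function impulseAway(keypoint, body, strength = 0.05) {
  const dx = body.x - keypoint.x;
  const dy = body.y - keypoint.y;
  const len = Math.hypot(dx, dy) || 1; // guard against a zero-length vector
  return { x: (dx / len) * strength, y: (dy / len) * strength };
}

const push = impulseAway({ x: 0, y: 0 }, { x: 3, y: 4 }, 0.05);
// push points along (3,4)/5, i.e. { x: 0.03, y: 0.04 }
```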
On the rendering side, a canvas layer draws the skeleton and motion trails on top
of the webcam feed.
Trails are implemented as short-lived segments that fade over time,
giving flowing lines when you move your hands quickly.
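One simple way to implement those short-lived segments is to give each one a remaining lifetime and derive its alpha from it, dropping segments once they expire. A sketch, with illustrative lifetimes:

```javascript
// Each trail segment carries a remaining lifetime; alpha is derived
// from it, so segments fade linearly and vanish when expired.
function updateTrails(segments, dt) {
  return segments
    .map((s) => ({ ...s, life: s.life - dt }))   // age every segment
    .filter((s) => s.life > 0)                   // drop expired ones
    .map((s) => ({ ...s, alpha: s.life / s.maxLife })); // fade the rest
}

let trails = [
  { x1: 0, y1: 0, x2: 5, y2: 5, life: 0.5, maxLife: 0.5 }, // fresh
  { x1: 5, y1: 5, x2: 9, y2: 2, life: 0.1, maxLife: 0.5 }, // nearly gone
];
trails = updateTrails(trails, 0.2);
// the fresh segment survives at alpha 0.6; the old one is dropped
```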
A lightweight UI layer then displays status messages like
"camera ready" and "pose detected" based on the current detection state.
The upcoming refresh will keep this architecture but focus on performance
and privacy: clearer permission flows, an explicit "start session" button,
and options to run the model at reduced resolution for low-powered devices.
Until that ships, this page stays in "technical preview" mode instead of
loading camera and AI code by default.