surreal sound experiences

using multiple devices in a physical space to create immersive, spatially distributed audio experiences that would be impossible with a single speaker. the core premise: every phone, speaker, and computer in a room is an independent audio emitter, and if you synchronize them, you can do spatial audio without expensive hardware — sound that appears to move across the room, audio that layers differently depending on where you're standing, experiences designed to feel surreal or disorienting in an interesting way. it's less about hi-fi playback and more about using ubiquitous connected speakers as a canvas.
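a rough sketch of the panning idea (hypothetical positions and gain law, not any particular API): a virtual source can be swept across a line of phones by giving each device a gain based on its distance to the source, normalized so total power stays constant.

```python
import math

def device_gains(source_pos, device_positions, rolloff=1.5):
    """Per-device gain for a virtual source at source_pos (metres along a line).

    Uses inverse-distance attenuation, then normalizes so the sum of
    squared gains is 1 (constant total power as the source moves).
    """
    # clamp distance so a source sitting exactly on a device doesn't blow up
    raw = [1.0 / max(abs(source_pos - d), 0.1) ** rolloff for d in device_positions]
    norm = math.sqrt(sum(g * g for g in raw))
    return [g / norm for g in raw]

# sweep a sound from left (0 m) to right (4 m) across three phones
phones = [0.0, 2.0, 4.0]
for step in range(5):
    print([round(g, 2) for g in device_gains(step * 1.0, phones)])
```

each device would multiply its local copy of the audio by its gain; animating `source_pos` over time is what makes the sound appear to travel across the room.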

the engineering challenge is latency. getting multiple devices to play audio synchronously over WiFi/Bluetooth is hard; even a few milliseconds of offset triggers the precedence effect (the ear localizes sound to whichever source arrives first), and the spatial illusion collapses. solutions exist — Sonos does multi-room sync, and protocols like PTP (Precision Time Protocol) can achieve sub-millisecond sync over local networks — but consumer-accessible, open tooling for this is limited. the interesting design space is what experiences you'd build if sync were solved: a horror experience where whispers seem to surround you using everyone's phones, a music venue where different instruments play from different corners, or a guided meditation where sound moves with deliberate spatial choreography.
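the sync part can be bootstrapped with the classic four-timestamp exchange used by NTP (and, in refined form, by PTP): each device pings a reference clock and estimates its offset, assuming roughly symmetric network delay. a minimal sketch with made-up timestamps:

```python
def estimate_offset(t1, t2, t3, t4):
    """Classic NTP-style clock offset and round-trip delay.

    t1: client send time      (client clock)
    t2: server receive time   (server clock)
    t3: server send time      (server clock)
    t4: client receive time   (client clock)
    Assumes the one-way delay is the same in both directions.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # total time spent on the network
    return offset, delay

# example: server clock runs 0.25 s ahead, network adds 10 ms each way,
# server takes 1 ms to turn the request around
offset, delay = estimate_offset(100.000, 100.260, 100.261, 100.021)
# offset ≈ 0.250 s, delay ≈ 0.020 s
```

once every device knows its offset to the reference clock, "play this cue at shared time T" becomes "play at local time T - offset", and the remaining error is dominated by delay asymmetry and each device's audio-output latency — the part that makes this genuinely hard on consumer hardware.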

related: keystroke music, sensor capturer, agent-based simulation, acoustic drone detection, invoking thoughts
