Sound Clouds

A large-scale ambient AI interactive installation where people move giant inflatables to generate playful visuals and soundscapes. It turns curiosity into joyful community play and meaning-making, pointing toward possible futures in adaptive, responsive environments. Over 200 participants engaged with Sound Clouds at a public exhibition in Atlanta. Papers accepted for CHI and NeurIPS.

AI

UX

UI

TEI

Ongoing

Installation

Publication

Overview

Sound Clouds is a large-scale interactive installation that explores how ambient AI systems can foster awe, collective play, and embodied social interaction. Using computer vision, embedded sensors, and ambisonic sound, the installation transforms floating inflatables into tactile, responsive interfaces that react to participants’ movements through light and dynamic spatial audio. As visitors push, lift, or follow the inflatables, they gradually build “mental models” of how the system behaves—shifting from analytical sensemaking to playful, embodied co-creation. The installation encourages curiosity, sensory attunement, and emergent social interaction without explicit instructions.

Over 200 participants at a public Atlanta exhibition engaged with Sound Clouds. Many described feelings of “childlike wonder”, immersion, and emotional openness. Beyond individual interaction, participants began collaborating, co-lifting the inflatables, and shaping the sound environment together. Sound Clouds showcases how ambient intelligence can invite relational and interpretive engagement rather than closed-loop efficiency.

Publications

Exhibitions

Paper accepted to CHI 2026

Accepted for NeurIPS Creative AI Track 2025 

C&C Pictorial is currently in the works

Goat Farm Arts Center, Atlanta (May 2025)
Digital Media Demo Day, Georgia Tech (April 2025)
Night of Ideas, Goat Farm, Atlanta (March 2025)

Project Details

My Role:  Technical Lead, Interactive System Engineer. I also contributed to the Infrastructure Design, Music, and Evaluation teams.


Technologies / Methods:  YOLO12n (CV) • Max/MSP • OSC/MIDI • ESP32 Microcontrollers • Accelerometers • Spatial Audio (Ambisonics) • Embedded Sensor Networks • Arduino • Python • User Interviews • Ethnographic Observations • Material Research


Team:  Expressive Machinery Lab at Georgia Tech—Brian Magerko (PI), TeAiris Majors, Jasmine Kaur, Jisu Kim, Daksh Kapoor, Matias Arturo Cevallos, William Boylan, Chengzhi Zhang, Hyunkyung Shin, Xiaoran Bai, Zihan Zhang, Jiahe Qian


Timeline: Jan 2025 - Ongoing


Funded by Catalyst Arts Grant

The Problem

Ambient Intelligence (AmI) has traditionally been designed for pragmatic outcomes—energy efficiency (Nest), behavioral nudging (ambient stair prompts), or surveillance (public security systems). These systems embody the principles of being Sensitive, Responsive, Adaptive, Transparent, Ubiquitous, and Intelligent, yet their interaction models often center on optimization, control, or automation.

What remains unexplored is a more human-centered, embodied, and experiential form of ambient intelligence—one that is not just responsive but meaningfully relational. We asked:

How can ambient artificial intelligence in public spaces be designed to evoke awe, wonder, and beauty—not just functionality?

More specifically, how might interactive environments invite curiosity, embodied discovery, collective play, and emergent meaning-making, rather than predictable or instruction-driven interactions?

The Solution

Sound Clouds is a large-scale ambient intelligence installation made of helium-filled inflatables (8–12 feet in diameter) that hover at near-neutral buoyancy. Participants physically move, push, lift, and follow the spheres, which translate motion and proximity into real-time music, spatial audio, and light behaviors.

There are no instructions. Interaction is guided only by curiosity, material behavior, and embodied exploration.

Instead of optimizing human behavior, Sound Clouds invites improvisation, sensory attunement, and collective interpretation—treating ambient AI as a creative and relational medium. Through motion, rhythm, and shared movement, participants collaboratively discover how the system behaves, slowly building their own mental models of the installation.

The installation demonstrates how ambient intelligence can foster awe, social connection, and emergent meaning-making.


Design Approach

Sound Clouds uses floating PVC spheres, each between 1.2m and 3m in diameter. 

A top-down GoPro camera captured live footage of the space and streamed it via Wi-Fi to a central Mac computer. A fine-tuned YOLO12n model detected each sphere's position and estimated its height (z-position) from its apparent diameter. This information was then routed over ESP-NOW and OSC to control the LEDs embedded in the spheres, while the inflatables' positions drove live music generation in Max/MSP, pairing visual responsiveness with spatialized audio.
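For a concrete picture of this pipeline, the sketch below shows a minimal Python version of the detection loop. It assumes the Ultralytics YOLO API and the python-osc library; the weights filename, camera index, OSC port, and calibration constant are illustrative placeholders, not the installation's actual values.

```python
# Minimal sketch of the detection-to-sound pipeline (assumes the Ultralytics
# YOLO API and the python-osc library; the weights file, camera index, OSC
# address, and calibration constant below are illustrative placeholders).
import cv2
from ultralytics import YOLO
from pythonosc.udp_client import SimpleUDPClient

REFERENCE_DIAMETER_PX = 400               # assumed apparent diameter of a sphere at floor level
model = YOLO("yolo12n-spheres.pt")        # hypothetical fine-tuned weights
osc = SimpleUDPClient("127.0.0.1", 8000)  # Max/MSP listening for /sphere messages

cap = cv2.VideoCapture(0)                 # GoPro stream exposed as a camera device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for i, box in enumerate(result.boxes.xywh):
        x, y, w, h = box.tolist()
        # With a ceiling-mounted camera, a larger apparent diameter means the
        # sphere is closer to the lens, i.e. higher off the floor.
        z = ((w + h) / 2) / REFERENCE_DIAMETER_PX
        osc.send_message(f"/sphere/{i}", [x, y, z])
cap.release()
```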

Sound generation was implemented in Max/MSP using a rule-based system layered over an ocean-like ambient background. Three key interaction scripts shaped the sound experience (a sketch of their logic follows below):

  • cloudGrid — a sphere entering a new zone triggers a distinct sound


  • orbProxi — two spheres moving close to each other activate a sound


  • pitchShift — a sphere’s height alters the pitch of the ambient background


This ocean-like ambient layer underscored the ethereal, atmospheric quality of the installation.
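For clarity, the same rule logic can be expressed in Python. In the installation these behaviors live as Max/MSP patches, so the sketch below is an assumed paraphrase: the OSC addresses, grid size, and proximity threshold are placeholders, and sphere positions are taken to be normalized to the camera frame.

```python
# Illustrative Python paraphrase of the three Max/MSP interaction rules; the
# OSC addresses, grid size, and proximity threshold are assumptions, and sphere
# positions are assumed to be normalized to the 0-1 range of the camera frame.
import math
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 8000)
GRID_CELLS = 4        # cloudGrid: divide the floor into a 4x4 grid of sound zones
PROXIMITY = 0.15      # orbProxi: normalized distance at which two spheres "meet"
last_cell = {}        # remembers each sphere's previous grid cell

def cloud_grid(sphere_id, x, y):
    """cloudGrid: trigger a distinct sound when a sphere enters a new zone."""
    cell = (int(x * GRID_CELLS), int(y * GRID_CELLS))
    if last_cell.get(sphere_id) != cell:
        last_cell[sphere_id] = cell
        osc.send_message("/cloudGrid/trigger", [sphere_id, *cell])

def orb_proxi(positions):
    """orbProxi: activate a sound when any two spheres drift close together."""
    ids = list(positions)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(positions[a][:2], positions[b][:2]) < PROXIMITY:
                osc.send_message("/orbProxi/trigger", [a, b])

def pitch_shift(sphere_id, z):
    """pitchShift: map a sphere's estimated height to the ambient bed's pitch."""
    osc.send_message("/pitchShift", [sphere_id, float(z)])
```

In practice, handlers like these would be called once per video frame with the positions produced by the detection loop above, keeping sound and light responses in step with participants' movements.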

Sound Clouds was developed through three exhibition-based iterations.

The first took place during Atlanta's Night of Ideas event, held in the warehouse that would later host the final iteration. The second was shown at Georgia Tech's Digital Media Demo Day. These early iterations focused on testing the technical systems and inflatable behavior.

In our final public deployment, the installation was hosted inside a renovated industrial warehouse (18m wide × 42m long × 9m tall). We transformed the space using dry ice, fabric canopies, and moving light projections to create an underwater-like atmosphere. Up to 20 people could interact with the spheres at the same time.

The final public exhibition functioned as both a deployment and a research study, involving more than 200 participants. Our team conducted user observations and post-interaction interviews to better understand how ambient AI could evoke curiosity, awe, and collaborative meaning-making. As part of the interview team, I asked participants how they played with the system, formed mental models, and gradually shifted from individual exploration to emergent social co-creation.

Key Outcomes

Sound Clouds serves as an exploration of how ambient AI can evoke awe, wonder, and emotional connection. Participants naturally shifted from individual curiosity to collaborative play, forming shared interpretations (mental models) of how the system behaved. Interviews revealed strong emotional responses, often described as “calming,” “dreamlike,” and a return to “childlike wonder.”

Impact

The installation challenges conventional uses of ambient intelligence by positioning it as a medium for collective meaning-making, sensory attunement, and social connection. Instead of optimizing or guiding behavior, Sound Clouds enabled spontaneous cooperation, relational engagement, and shared sensemaking. The project seeks to inspire new possibilities for designing ambient AI as public, participatory, and emotionally resonant.


Reflection and Futures

Sound Clouds reframes the design of ambient intelligence beyond a technical system to an experiential and social practice. It opens future directions for designing responsive environments that cultivate social affect, public imagination, and shared reflection. In doing so, it provokes new design possibilities for AI as a medium to shape atmosphere, belonging, and affective human–AI encounters.

Let’s Create Together

I love working with interdisciplinary and curious people. Whether you want to collaborate, brainstorm, or exchange ideas, I’d love to connect.
