21 August 2025

Spiky Piano

Decorative logo for the Spiky Piano project

Spiky Piano transforms neuromorphic vision into music by triggering piano notes from event-based motion. Recognised with the Track 3 Outstanding Participation Award and a USD 500 prize at the SpikeCV-WUJI Challenge, IJCAI 2025, the demo highlights the creative and scientific promise of neuromorphic computing.

Using event-based inputs and SpikeCV integration, falling objects trigger piano notes in real time. This playful yet technically robust demo bridges spiking vision, interactivity, and sound, showcasing neuromorphic computing's potential for entertainment, robotics, and scientific applications.

The system captures asynchronous pixel changes with an event camera, producing spike streams whenever motion occurs. These spikes are filtered for noise and rendered into binary frames. Connected-components analysis then identifies motion clusters and their bounding boxes, and a line-crossing rule triggers piano notes, mapping small, medium, and large objects to different tones.

To improve reproducibility, we integrated SpikeCV: Metavision .raw recordings were converted into SpikeCV-compatible .dat files and loaded via the SpikeStream API. Filters such as temporal persistence, morphological closing, and downsampling stabilised detections, while audio synthesis and post-hoc muxing produced MP4 recordings with aligned sound and visuals.
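The detection step above can be sketched in a few lines of plain Python. This is a minimal illustration, not the project's actual code: the trigger row, the size thresholds, and the tone labels (`TRIGGER_ROW`, `note_for_size`) are assumed names chosen for the example.

```python
# Sketch: 4-connected component labelling on a binary event frame,
# bounding boxes per cluster, and a line-crossing rule that maps
# component size to a tone class. All thresholds are illustrative.

from collections import deque

TRIGGER_ROW = 2  # hypothetical horizontal trigger line (row index)

def connected_components(frame):
    """Return one bounding box (top, left, bottom, right) per 4-connected blob."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] and not seen[y][x]:
                top = bottom = y
                left = right = x
                q = deque([(y, x)])
                seen[y][x] = True
                while q:  # breadth-first flood fill over the blob
                    cy, cx = q.popleft()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and frame[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

def note_for_size(box):
    """Map bounding-box area to a tone class: small -> high, large -> low."""
    top, left, bottom, right = box
    area = (bottom - top + 1) * (right - left + 1)
    return "low" if area >= 9 else "mid" if area >= 4 else "high"

def crossing_notes(frame):
    """Notes for every component whose box straddles the trigger line."""
    return [note_for_size(b) for b in connected_components(frame)
            if b[0] <= TRIGGER_ROW <= b[2]]
```

For example, a 2x2 blob straddling row 2 yields `crossing_notes(frame) == ["mid"]`. A real implementation would operate on the denoised binary frames produced from the spike stream, at camera rate.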
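The stabilising filters mentioned above can likewise be illustrated with a small, dependency-free sketch. In practice these would be the SpikeCV/OpenCV equivalents; the function names and the 3x3 neighbourhood here are assumptions made for the example, with frames represented as 2D lists of 0/1.

```python
# Sketch of three stabilising filters: temporal persistence (suppress
# flicker noise), morphological closing (fill small gaps in a blob),
# and max-pool downsampling (reduce resolution, keep any activity).

def temporal_persistence(frames, k=2):
    """Keep a pixel on only if it was active in at least k of the given frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[1 if sum(f[y][x] for f in frames) >= k else 0
             for x in range(w)] for y in range(h)]

def _morph(frame, op):
    """Apply a 3x3 neighbourhood max (dilation) or min (erosion)."""
    h, w = len(frame), len(frame[0])
    return [[op(frame[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def closing(frame):
    """Morphological closing: dilation then erosion, filling small holes."""
    return _morph(_morph(frame, max), min)

def downsample(frame, factor=2):
    """Max-pool downsampling: an output block is on if any input pixel is on."""
    h, w = len(frame), len(frame[0])
    return [[max(frame[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor))
             for x in range(w // factor)] for y in range(h // factor)]
```

Chaining `temporal_persistence`, `closing`, and `downsample` over consecutive binary frames gives the kind of stable, compact detections the note trigger needs.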
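The audio side can also be sketched with the standard library alone: synthesise a short sine tone per triggered note, write the sequence to a WAV file, and mux it with the rendered video afterwards (e.g. with an external `ffmpeg` call). The size-to-pitch mapping below is an assumption for illustration, not the project's actual tuning.

```python
# Sketch: 16-bit mono PCM synthesis of a note sequence using only the
# standard library. Tone frequencies are illustrative A-octave pitches.

import math
import struct
import wave

RATE = 44100
TONES = {"high": 880.0, "mid": 440.0, "low": 220.0}  # assumed size -> pitch map

def tone_samples(freq, dur=0.25, amp=0.5):
    """16-bit PCM samples for a sine tone with a linear fade-out (no clicks)."""
    n = int(RATE * dur)
    return [int(amp * (1 - i / n) * 32767 * math.sin(2 * math.pi * freq * i / RATE))
            for i in range(n)]

def write_notes(path, notes):
    """Concatenate the tones for a note sequence into a mono WAV file."""
    samples = [s for note in notes for s in tone_samples(TONES[note])]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 2 bytes = 16-bit PCM
        w.setframerate(RATE)
        w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Post-hoc muxing then combines this track with the silent visualisation, for instance `ffmpeg -i frames.mp4 -i notes.wav -c:v copy out.mp4`, aligning the note onsets with the rendered line crossings.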

Beyond entertainment, the underlying algorithm has broad applications: industrial sorting and counting on high-speed conveyors; obstacle recognition in autonomous robots and drones; real-time traffic monitoring and smart-city analytics; sports technology for rapid event detection; and microscopy experiments requiring precise event triggers. These use cases demonstrate the scientific and commercial promise of neuromorphic vision.

We thank the Metavision dataset providers and the SpikeCV project team for enabling reproducible experimentation. We also acknowledge the collaborative efforts of Muhammad Aitsam, Syed Saad Hassan, and Dr Alejandro Jiménez Rodríguez.