How can noisy sensor input be used for expressive musical effect? And how can Elixir and Nerves help? I explore these questions by experimenting with DIY audio controllers built from Arduino boards, a Raspberry Pi running Nerves, and Pure Data. I work with prototypes rather than finished devices, and that is part of the point: my focus is on process and discovery rather than a polished product. The problems I aim to solve with code are filtering unstable readings, deciding which input changes are meaningful enough to become musical control, shaping value ranges and timing, and keeping the output expressive without letting it descend into chaos. The Elixir/OTP model, with its isolated processes for input handling and its resilience in long-running systems, is well suited to these problems. I also plan to include a live demo of the system in my talk. The goal is to treat raw sensor noise as a technical problem while keeping noisy sound a creative choice. The talk may be useful for anyone building systems driven by noisy physical input, especially where signal quality and expressiveness matter at the same time.
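To give a flavour of the kind of filtering the talk covers, here is a minimal sketch in Elixir: an exponential moving average smoother combined with a dead-band threshold, so that only meaningful changes become control values. The module name, constants, and function names are illustrative assumptions, not the talk's actual code.

```elixir
defmodule SensorFilter do
  # Hypothetical example values, not from the talk:
  @alpha 0.2    # EMA smoothing factor: lower = smoother but slower
  @deadband 3   # ignore changes smaller than this (raw sensor units)

  # Smooth one raw reading against the previous smoothed value (EMA).
  def smooth(raw, prev), do: prev + @alpha * (raw - prev)

  # A change counts as meaningful only if it exceeds the dead band.
  def significant?(new, last_sent), do: abs(new - last_sent) >= @deadband

  # Fold a list of raw readings into the control values we would emit.
  def control_values(readings) do
    {_, _, out} =
      Enum.reduce(readings, {0.0, 0.0, []}, fn raw, {prev, last_sent, out} ->
        s = smooth(raw, prev)

        if significant?(s, last_sent) do
          {s, s, [s | out]}   # emit a new control value
        else
          {s, last_sent, out} # suppress jitter below the dead band
        end
      end)

    Enum.reverse(out)
  end
end
```

In a real Nerves setup this logic would typically live in its own GenServer, so a crash in one sensor's filtering process does not take down the rest of the system.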
Full-stack developer at Curiosum, working on web applications in React and Elixir. PhD from the Institute of Audiovisual Arts, Jagiellonian University, Kraków. Outside of her professional work, she has always been interested in combining technology with the arts and humanities, with a recent focus on hardware hacking and DIY electronics.