High-quality audio is the backbone of any communication platform, but real-world environments are rarely quiet. To make matters worse, legacy telephony protocols often force low audio resolutions, creating a "perfect storm" of poor audio quality. In this talk, we'll explore how we built a real-time noise reduction system for phone calls using the Membrane Framework and Nx. We will go "under the hood" of a live multimedia pipeline that bridges audio between callers while performing AI-powered enhancement on the fly. We will also detail the journey of rewriting our AI pre- and post-processing logic from Python into native Elixir, and share the technical challenges we encountered along the way — from optimizing tensor operations in Nx to synchronizing multiple live audio streams for maximum model inference performance.
Software Developer at Software Mansion, where I contribute to the Membrane Framework ecosystem, focusing on Membrane Core and Boombox. My experience includes implementing HLS and WebRTC in Elixir, as well as developing Unifex and Bundlex — tools for integrating Elixir with C and C++ code. Beyond my professional life, I study psychology, train for triathlons, and learn Spanish. A fan of spending time in the mountains, the Camino de Santiago, and martial arts.
I am a Software Developer at Software Mansion, currently specializing in machine learning across the Elixir (Nx) and React Native ecosystems. Previously, I worked at Dashbit, where I developed Scholar, a library for traditional machine learning built on top of Nx. My technical passion lies in high-performance computing and optimizing low-level C++ code. Outside of programming, I enjoy arthouse cinema and film analysis, playing contract bridge, and traveling.