I have always been fascinated by the idea of representing people’s lives through sound — or rather, of having a kind of “sound companion” that creates musical structures according to what I am doing at any given time. It would know about my current mood, the situation I am in, the activity I am engaged in. Something custom and tailored, almost like my own personal soundtrack. And no, we’re not talking about playlists. We’re talking about adaptive music, and I’m currently running a project at Porsche Digital dedicated to researching this fascinating topic and its opportunities.
Adaptive music, perhaps better known to some as interactive music, is old hat — at least for those who regularly play video games, where sound, volume, rhythm, or melodies change in response to specific events in the game.
Philip Glass, regarded as one of the most influential composers of the late 20th century, once said in an interview that, for him, interactive music is when a piece offers different musical directions or endings, and the listener can actively choose how the composition evolves over time.
Now imagine bringing this to real life. Wouldn’t that be amazing?
Adaptive Music as a Digital Business?
While a conventional, ‘linear’ music track always sounds the same, adaptive compositions sound a little different every time they are played. There might be similar textures and musical elements in the mix, but the way they are interwoven, and the way the listener triggers certain elements, make for a completely new experience every single time. Why? Because the data snapshots used to generate the music will never be the same. I find that pretty intriguing.
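To make the idea concrete, here is a minimal toy sketch of that mechanic in Python. Everything in it is an assumption for illustration — the snapshot fields, the layer names, and the mapping rules are invented, not part of any real product — but it shows how the same data snapshot can still yield a slightly different mix each time:

```python
import random

def arrange(snapshot, seed=None):
    """Choose how prominently each pre-composed layer plays,
    based on a (hypothetical) data snapshot from a drive."""
    rng = random.Random(seed)
    layers = {"pads": 0.5, "percussion": 0.0, "lead": 0.0}
    # Faster driving brings in percussion; winding roads add a lead line.
    layers["percussion"] = min(1.0, snapshot["speed_kmh"] / 120)
    layers["lead"] = 0.8 if snapshot["road"] == "winding" else 0.2
    # A small random offset keeps two identical snapshots from
    # producing exactly the same mix.
    return {name: round(level + rng.uniform(-0.05, 0.05), 2)
            for name, level in layers.items()}

# Same snapshot, two plays: similar texture, never identical.
mix_a = arrange({"speed_kmh": 90, "road": "winding"}, seed=1)
mix_b = arrange({"speed_kmh": 90, "road": "winding"}, seed=2)
print(mix_a != mix_b)
```

A real system would of course drive a sequencer or audio engine rather than print numbers, but the principle is the same: data in, arrangement out.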
At Porsche Digital, we see the potential of adaptive sound. Simply shaping an audio signal with an equalizer, theme-based playlists, or BPM-synced workout music is a thing of the past. More than ever, we expect far deeper personalization of music overall. I am sure it will become an integral part of our future in-car entertainment experience, and certainly of our everyday lives.
It’s no secret that we will soon see more electric cars on the streets, cars without combustion-engine sound, which leaves an ‘experience void’ to be filled. The driver can become the creator of their own sound experience: the way they drive, accelerate, brake, climb winding roads, or move through city traffic will shape how a musical piece evolves.
Now, imagine location-based features on top: specific adaptive soundtracks available only in distinct regions of the world. Best of all, you can take that experience with you when you go for a walk, because it will run on your mobile phone as well.
Predictive Sound: Music Like Water
What about predictive sound? Imagine you get into your car and the system knows you’ve had a hard day. It knows most of your routines and listening preferences and can generate the right mix of music to accompany you on your way home. Parts of this are still in the future — but some are not. There are ways to implement a few of these features already today.
The first time I had the chance to get my hands on interactive music projects was more than 15 years ago, when the topic was still understood quite differently. I did many sessions with musicians around the globe to learn about their creative process and ideas, and ultimately to gather feedback on music projects that never left the labs of research facilities and thus, sadly, never saw the light of day.
What I learned is that the most common reason those projects never went live was the business model — or rather, the lack of one. Would people spend money on such a sound experience? Would they even want adaptive music in their everyday lives? The market was always considered too niche, too small, too special. No money to be made. With the rise of streaming, our appreciation of music stopped correlating with our willingness to pay for it. “Music Like Water” was the title of a book I read back then, trying to understand the thin line between “everything is free” and creative work that still deserves to be valued.
To add to that, back in 2009 I started an almost three-year journey to create an interactive music app. The idea was the same: data “translated” into meaningful musical structures. But there were many challenges — the technology wasn’t ready to provide, for instance, fine-grained, high-resolution GPS on mobile phones. So the initial concept had to be adjusted and, ultimately, stripped down.
Ironically, today’s biggest restrictions on generating truly adaptive soundtracks are still technological. Don’t get me wrong, we’re more advanced than ever, but the real breakthrough I am talking about is real-time sequencing and sound synthesis in the cloud. If we manage to reach that stage, things get far beyond exciting, because then you no longer have to live with pre-composed musical pieces merely altered according to specific data inputs. You can take the same data, but instead of triggering fixed musical elements, generate and stream them on the fly. Bandwidth and cloud DAW power will do the trick.
On the other hand, AI-generated music still has some way to go, while a human composition will always be a human composition. The artistic work, that one initial idea, that one spark, will always be the main driver in any creative process. There is no substitute. If asked today, I would say the future could be hybrid: half human-made, half machine-made, perhaps.
When Will Music Become Intelligent?
We already have so much intelligence integrated into so many areas of our daily lives that we now expect the same predictive intelligence for music. But it’s not there. Not yet.
We’d need systems that continuously learn about listening preferences. A simple press of the “skip” button could tell us so much, but do we know what it actually means? Do we dislike that specific track? Do we dislike it just now but would happily listen tomorrow? Do we dislike the artist altogether?
While thinking about how to approach this topic, I still find myself fiddling with playlists and artist recommendations that don’t meet my expectations. As long as the “80/20 Pareto rule” applies (20 percent of the catalogue generates 80 percent of the revenue), some might not see the urgency for immediate improvement.
Technology and interactivity are only two variables in a bigger equation. My interest is in investigating new forms and formats for music, to be enjoyed in a completely new way, because after all, it’s always the content that matters most.
As Philip Glass once said: “Interactive music will come in many different forms.”
I couldn’t agree more!