Spotify ML Day – July 9th 2018 – Stockholm

June 29, 2018 Published by Spotify Engineering

We are happy to announce the first Spotify Machine Learning Day taking place Monday, July 9th, 2018, 9:30 AM – 5:30 PM at Spotify, Regeringsgatan 19, Stockholm, Sweden.

The event brings together leading researchers from industry on machine learning topics related to music understanding and generation, recommendation, and counterfactual evaluation. We invite anyone interested in learning more about the theory and impact of machine learning to attend.

Agenda of the day

Opening Keynote – François Pachet, Spotify
Using AI to help make good songs

AI has been used since the 50s to generate music. Recently, we developed AI techniques able to assist composers of mainstream music in the songwriting process, as highlighted by the Hello World album. I will describe the process behind such AI-assisted composition and point to current directions of this work.

Presenter Bio:

François Pachet has been director of the Spotify Creator Technology Research Lab since 2017. At Spotify he designs the next generation of AI-based tools for musicians. Prior to Spotify, François Pachet was director of the SONY Computer Science Laboratory Paris. At SONY he set up a music research team working on interactive music listening, composition and performance. He conducted the ERC-funded Flow Machines project, during which he developed technologies for style imitation under user constraints. This project produced the first mainstream music title composed with AI: “Daddy’s Car”. He then created the label Flow Records, which released Hello World, the first music album composed with Artificial Intelligence. Hello World is the result of the collaboration between AI, Benoit Carré (aka SKYGGE) and many other musicians. He is also a jazz guitarist, composer and performer of two music albums.


Claire Dorman, Pandora
Delivering Personalized Mood, Genre, and Activity Playlists on Pandora

We will describe the science that powers a new feature on the music streaming service Pandora: personalized playlists in 60+ genre, mood, and activity themes. We will provide an overview of the ensemble of 75+ recommendation strategies that drives many of the features on the platform. The playlists are made possible by tagging individual songs, artists, collections, and listeners with appropriate themes at scale. We use our Music Genome Project database as a critical supplement to feedback-based collaborative filtering (CF).
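
As a rough illustration of how a content-tag signal can supplement collaborative filtering (a toy sketch, not Pandora’s implementation; the `tag_affinity` and `hybrid_score` names and the `alpha` mixing weight are hypothetical):

```python
# Toy sketch: blend a collaborative-filtering score with a content-tag
# affinity score so that sparsely-played tracks can still be matched to a
# theme such as "workout". Not Pandora's actual method.

def tag_affinity(track_tags, theme_tags):
    """Jaccard overlap between a track's tags and a theme's tags."""
    track_tags, theme_tags = set(track_tags), set(theme_tags)
    return len(track_tags & theme_tags) / max(1, len(track_tags | theme_tags))

def hybrid_score(cf_score, track_tags, theme_tags, alpha=0.7):
    """Weighted blend; alpha is a hypothetical mixing weight."""
    return alpha * cf_score + (1 - alpha) * tag_affinity(track_tags, theme_tags)

score = hybrid_score(cf_score=0.42,
                     track_tags=["high energy", "electronic", "driving beat"],
                     theme_tags=["high energy", "driving beat", "upbeat"])
print(round(score, 3))
```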

Presenter Bio:

Claire has a PhD in data science, three years of industry experience developing machine learning models for music recommendation, and six years of academic experience building statistical models of astrophysical systems. At Pandora, she is leading projects on semi-personalized recommenders, optimization in the low-feedback domain, and the Personalized Soundtracks feature.


James McInerney, Spotify
Explore, Exploit, and Explain: Personalizing Explainable Recommendations with Bandits

The multi-armed bandit is an important framework for balancing exploration with exploitation in recommendation. In parallel, explaining recommendations (“recsplanations”) is crucial if users are to understand their recommendations. We provide the first method that combines exploration and explanations in a principled manner. In particular, our method is able to jointly (1) learn which explanations each user responds to; (2) learn the best content to recommend for each user; and (3) balance exploration with exploitation to deal with uncertainty. Experiments with historical log data and tests with live production traffic in a large-scale music recommendation service show a significant improvement in user engagement.
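
As a simplified illustration of the explore/exploit trade-off over joint (item, explanation) arms (a toy epsilon-greedy sketch, not the method presented in the talk; the `JointBandit` class, the arm names and the `EPSILON` value are made up):

```python
import random
from collections import defaultdict

EPSILON = 0.1  # exploration rate (assumed value)

class JointBandit:
    """Epsilon-greedy bandit over joint (item, explanation) arms, per user."""
    def __init__(self, items, explanations):
        self.arms = [(i, e) for i in items for e in explanations]
        self.counts = defaultdict(int)     # (user, arm) -> impressions
        self.rewards = defaultdict(float)  # (user, arm) -> cumulative reward

    def recommend(self, user):
        if random.random() < EPSILON:
            return random.choice(self.arms)  # explore
        # exploit: arm with the highest empirical reward for this user
        return max(self.arms,
                   key=lambda a: self.rewards[(user, a)] / max(1, self.counts[(user, a)]))

    def update(self, user, arm, reward):
        self.counts[(user, arm)] += 1
        self.rewards[(user, arm)] += reward

bandit = JointBandit(items=["playlist_A", "playlist_B"],
                     explanations=["because you listened to X", "popular near you"])
arm = bandit.recommend(user="u1")
bandit.update("u1", arm, reward=1.0)  # e.g. a click
```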

Presenter Bio:

James is a Senior Research Scientist at Spotify, New York. He works on scalable probabilistic methods and recommendation. He was previously a postdoc at Columbia University and Princeton University and obtained a PhD from the University of Southampton on the topic of Bayesian models for spatio-temporal data.


Clément Calauzènes, Criteo
Offline A/B testing for Recommender Systems

Before A/B testing a new version of a recommender system online, it is usual to perform some offline evaluation on historical data to pre-select the most promising changes. An accurate evaluation method is thus crucial to iterate quickly and keep up the pace of improvement of the system. In the last decade, Counterfactual Reasoning has offered an alternative to traditional ranking metrics when the observed feedback is implicit, but textbook implementations often suffer from large variance issues, making them practically unusable. We explore practical assumptions under which the variance of such estimators can be controlled to provide an offline evaluation method that is accurate in real-world conditions.
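
As a rough sketch of the kind of counterfactual estimator discussed here, the snippet below computes a clipped inverse-propensity-score (IPS) estimate of a new policy’s value from logged data; clipping the importance weights is one standard way to trade a small bias for a large variance reduction. The function name, the clipping threshold and the synthetic data are illustrative, not from the talk.

```python
import numpy as np

def clipped_ips(rewards, logging_probs, target_probs, clip=10.0):
    """Estimate the average reward of a new policy from logged data.

    rewards:        observed rewards for the logged actions
    logging_probs:  probability the logging policy gave to each logged action
    target_probs:   probability the new policy would give to the same action
    clip:           cap on the importance weights to control variance
    """
    weights = np.minimum(target_probs / logging_probs, clip)
    return float(np.mean(weights * rewards))

# Example with synthetic logs
rng = np.random.default_rng(0)
n = 10_000
logging_probs = rng.uniform(0.05, 1.0, size=n)
target_probs = rng.uniform(0.05, 1.0, size=n)
rewards = rng.binomial(1, 0.1, size=n).astype(float)
print(clipped_ips(rewards, logging_probs, target_probs))
```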

Presenter Bio:

Clement Calauzenes is a Research Scientist leading a research group at Criteo. He received his PhD in Computer Science from Université Pierre et Marie Curie in 2013, advised by Prof. Patrick Gallinari. His current and past research focused on learning to rank theory, recommender systems, counterfactual inference and computational advertising.


Adith Swaminathan, Microsoft Research
Learning from logged bandit feedback

Many of the most impactful applications of machine learning are not just about prediction, but are about putting learning systems in control of selecting the right action at the right time (e.g., search engines, recommender systems or automated trading platforms). These systems are both producers and users of data: the logs of the selected actions and their outcomes (e.g., derived from clicks, ratings or revenue) can provide valuable training data for learning the next generation of the system, giving rise to some of the biggest datasets we have collected. Machine learning in these settings is challenging, since the system in operation biases the log data through the actions it selects, and outcomes remain unknown for the actions not taken. Learning methods must therefore reason about how changes to the system will affect future outcomes. We will summarize recent advances in these counterfactual learning techniques, and demonstrate how deep neural networks can be trained in these settings.
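
As a minimal sketch of counterfactual learning from logged bandit feedback (not the specific methods covered in the talk), the snippet below trains a small softmax policy in PyTorch by maximizing a clipped IPS estimate of its reward on synthetic logged (context, action, reward, propensity) tuples; all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

n_features, n_actions = 16, 5
policy = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# Synthetic logged data: contexts, logged actions, rewards, logging propensities
n = 1024
contexts = torch.randn(n, n_features)
actions = torch.randint(0, n_actions, (n,))
rewards = torch.rand(n)
propensities = torch.rand(n) * 0.9 + 0.1  # probability the logger chose `actions`

for _ in range(100):
    log_probs = torch.log_softmax(policy(contexts), dim=1)
    pi_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1).exp()
    weights = (pi_a / propensities).clamp(max=10.0)  # clipped importance weights
    loss = -(weights * rewards).mean()               # maximize IPS-estimated reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```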

Presenter Bio:

Adith is a researcher in the Reinforcement Learning Group at Microsoft Research AI. He focuses on principles and algorithms that can improve human-centered systems using machine learning and counterfactual reasoning.


Rachel Bittner, Spotify
Multitask Learning for Fundamental Frequency Estimation in Music

Fundamental frequency (f0) estimation from polyphonic music includes the tasks of multiple-f0, melody, vocal, and bass line estimation. Historically these problems have been approached separately, and until recently little work had used learning-based approaches. We present a multitask deep learning architecture that jointly predicts outputs for multi-f0, melody, vocal and bass line estimation and is trained using a large, semi-automatically annotated dataset. We provide evidence of the usefulness of the recently proposed Harmonic CQT for fundamental frequency estimation. Finally, we show that the multitask model outperforms its single-task counterparts, and that the addition of synthetically generated training data is beneficial.
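
As an illustrative sketch of the multitask idea (not the architecture from the talk), the snippet below shares a small convolutional trunk over a harmonic CQT input and attaches one salience-map head per task; the layer sizes and input shape are hypothetical.

```python
import torch
import torch.nn as nn

class MultitaskF0(nn.Module):
    """Shared trunk over an HCQT input with one output head per task."""
    def __init__(self, n_harmonics=6, n_bins=360):
        super().__init__()
        # shared trunk over (batch, harmonics, freq_bins, time) HCQT input
        self.trunk = nn.Sequential(
            nn.Conv2d(n_harmonics, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # one 1x1-conv head per task, each producing a salience map
        self.heads = nn.ModuleDict({
            task: nn.Conv2d(32, 1, kernel_size=1)
            for task in ("multif0", "melody", "vocal", "bass")
        })

    def forward(self, hcqt):
        shared = self.trunk(hcqt)
        return {task: torch.sigmoid(head(shared)) for task, head in self.heads.items()}

model = MultitaskF0()
hcqt = torch.randn(2, 6, 360, 50)  # batch of 2 HCQT excerpts
outputs = model(hcqt)
print({k: tuple(v.shape) for k, v in outputs.items()})
```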

Presenter Bio:

Rachel is a Research Scientist at Spotify in New York City, and recently completed her Ph.D. at the Music and Audio Research Lab at New York University under Dr. Juan P. Bello. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. Her dissertation work applied machine learning to various types of fundamental frequency estimation.


Closing Keynote – Anna Huang, Curtis Hawthorne, Adam Roberts, Google Magenta – Google AI
Exploring the role of ML in music & art

The Magenta project launched two years ago to explore the role of machine learning in the process of creating music and art. We will give you an overview of what we have learned, including deep dives on our most recent projects in generative modelling and transcription. We will also describe how you can get involved.


How to get there

Spotify Office, Regeringsgatan 19, 111 53 Stockholm

