Spotify ML Day – Coverage

August 22, 2018 · Published by Nicola Bortignon

Spotify presented its first Machine Learning Day at Spotify headquarters in Stockholm on Monday 9th July, to coincide with the International Conference on Machine Learning starting the following day. The ML Day brought together 150 researchers and engineers from Spotify and the wider community around the themes of music understanding, generation, and recommendation. We explored and discussed cutting-edge questions such as “how to learn a representation of a song?”, “how to generate coherent music?”, “what does a modern recommender system look like?”, and “how to avoid filter bubbles in recommendation?”

The technical agenda comprised two internal speakers from Tech Research at Spotify and four external speakers from Microsoft Research, Google AI, Pandora, and Criteo.

The event started with a warm welcome from Oskar Stål, the VP of Consumer Engagement at Spotify. Rishabh Mehrotra then gave an excellent overview of the machine learning work happening at Spotify.

James McInerney of Spotify then presented the latest research conducted in collaboration with Spotify’s homepage recommendation team. The proposed approach uses contextual bandits to personalize explainable recommendations (“recsplanations”). They use importance sample reweighting to undo the bias introduced by the deployed recommender, enabling counterfactual evaluation and training from logged data.
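
To give a flavor of the idea (this is a generic illustration, not Spotify’s implementation), importance sample reweighting scores a candidate policy on logs collected under the deployed one by reweighting each logged reward with the ratio of the two policies’ action probabilities. All numbers and names below are invented:

```python
import numpy as np

def ips_estimate(logged_rewards, logging_probs, target_probs):
    """Inverse propensity scoring (IPS).

    Estimates the average reward a new (target) policy would have earned
    on traffic served by the deployed (logging) policy, by reweighting
    each logged reward with the ratio of action probabilities.
    """
    weights = target_probs / logging_probs    # importance weights
    return np.mean(weights * logged_rewards)  # unbiased if logging_probs > 0

# Toy log of five recommendations: reward is 1 if the user streamed the
# item; each policy's probability of showing that item (invented numbers).
rewards       = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.array([0.5, 0.2, 0.4, 0.3, 0.5])
target_probs  = np.array([0.7, 0.1, 0.6, 0.1, 0.8])

print(ips_estimate(rewards, logging_probs, target_probs))
```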

Next up, Claire Dorman from Pandora presented the ensemble methods Pandora uses to pull together many contextual and item-attribute signals when recommending music. A key signal is the Music Genome Project, which provides descriptive features for every song in their catalog.

Clément Calauzènes from Criteo continued the theme of importance sample reweighting with his research into methods for reducing the bias of reweighting estimators for offline A/B testing.
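
One source of that bias, sketched here purely as a generic illustration (not Criteo’s specific estimators): plain IPS weights can blow up, so practitioners often clip them and self-normalize, which lowers variance but deliberately biases the estimate; quantifying and reducing that bias is what this line of research studies. The function and numbers below are invented:

```python
import numpy as np

def clipped_snips(logged_rewards, logging_probs, target_probs, clip=10.0):
    """Clipped, self-normalized variant of the IPS estimator.

    Clipping bounds extreme importance weights, and normalizing by the sum
    of weights (instead of the sample count) lowers variance, at the cost
    of a bias that must then be analyzed and controlled.
    """
    weights = np.minimum(target_probs / logging_probs, clip)
    return np.sum(weights * logged_rewards) / np.sum(weights)

# Same invented log as above:
rewards       = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
logging_probs = np.array([0.5, 0.2, 0.4, 0.3, 0.5])
target_probs  = np.array([0.7, 0.1, 0.6, 0.1, 0.8])

print(clipped_snips(rewards, logging_probs, target_probs))
```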

Rounding out the counterfactual theme, Adith Swaminathan from Microsoft Research presented his work on deep learning from logged counterfactual bandit feedback.

Rachel Bittner from Spotify presented her work on inferring musical transcriptions from the audio signals of songs using a multi-task deep neural architecture. She found that the multi-task model outperforms separate single-task models on tasks such as transcribing the vocal or bass line.
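
The published architecture differs in its details, but the core multi-task idea can be sketched as a shared encoder over a spectrogram with one output head per task; everything below (layer sizes, the two-task setup, the `MultiTaskTranscriber` name) is invented for illustration:

```python
import torch
import torch.nn as nn

class MultiTaskTranscriber(nn.Module):
    """Minimal shared-trunk multi-task sketch (not the published model):
    one convolutional encoder over a spectrogram, with a separate output
    head per transcription task (e.g. vocal line, bass line)."""

    def __init__(self, n_tasks=2):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared representation
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # one 1x1-conv head per task, each predicting a pitch activation map
        self.heads = nn.ModuleList(
            [nn.Conv2d(32, 1, kernel_size=1) for _ in range(n_tasks)]
        )

    def forward(self, spectrogram):                 # (batch, 1, bins, time)
        shared = self.trunk(spectrogram)
        return [torch.sigmoid(head(shared)) for head in self.heads]

model = MultiTaskTranscriber()
vocals, bass = model(torch.randn(1, 1, 256, 128))
# Training sums a per-task loss over both outputs, so gradients from every
# task shape the shared trunk -- the usual source of the multi-task gain.
```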

Google AI researchers Anna Huang and Curtis Hawthorne gave an overview of the latest work by Google Magenta. They presented a variety of neural architectures for generating realistic piano-roll music. Of particular interest was their MusicVAE model, which can interpolate naturally between any two samples of music.
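
The real implementation lives in the Magenta codebase; as a generic sketch of how latent-space interpolation works, one encodes two pieces, blends their latent codes (spherical interpolation is a common choice for Gaussian latents), and decodes each blend. The `vae.encode`/`vae.decode` calls below are hypothetical stand-ins for a trained model:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors. Often preferred
    over linear blending in Gaussian latent spaces, since intermediate
    points stay at a typical distance from the origin."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=64), rng.normal(size=64)  # stand-in latent codes
midpoint = slerp(z_a, z_b, 0.5)                      # a point "between" them

# With a trained VAE (hypothetical API), an interpolation sweep would be:
#   z_a, z_b = vae.encode(melody_a), vae.encode(melody_b)
#   sequence = [vae.decode(slerp(z_a, z_b, t)) for t in np.linspace(0, 1, 9)]
```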

The day was very enjoyable, with a collaborative spirit that brought together world experts from multiple industrial research labs. The event was hosted at Spotify’s new offices in Urban Escape, Stockholm. Drinks were enjoyed afterwards on the office roof bar overlooking Stockholm on a gorgeous July day.

