Building a technical career path at Spotify

Spotify launched a career path framework for individuals last year. Since then, I’ve spoken to leaders at several other companies about it. This seems to be a bit of a hot topic, so I’ve decided to write about our model and how we arrived at it. Hopefully it will be useful to your company. This […]


SDN Internet Router – Part 2

Introduction In the previous post we talked about how the Internet finds its way to content and users, how Internet relationships work, and what we do to make sure we can deliver music to you. We also introduced some of the technical and economic challenges that come with peering. In this post we will elaborate […]


SDN Internet Router – Part 1


Introduction This is the first part of a series of posts about a project we have been working on for a while now, which we call SIR (SDN Internet Router). To give some context, we will first introduce how the Internet routes packets, what peering is, and how Spotify connects to the rest […]


A 101 on 1:1s


“You just talk to them for half an hour.” That’s the guideline I got when I first joined Spotify for how to run my 1:1s — a recurring half-hour meeting between a manager and their team members. It didn’t leave me with much to work with. I was admittedly clueless about the practice. I […]


ELS: a latency-based load balancer, part 2


What to Measure? In part 1, we already mentioned a few metrics that should be considered by the load balancer: the success latency ℓ and success rate s of each machine, and the number of outstanding requests q between the load balancer and each machine. These are the requests that have been sent out but haven’t received a […]
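To make the trade-off between these metrics concrete, here is a toy scoring sketch. The formula, machine names, and numbers are illustrative assumptions, not the actual ELS algorithm: the idea is simply that a machine with many outstanding requests or a low success rate should look "expensive" even when its raw latency is good.

```python
# Toy latency-based scoring for choosing a back-end machine.
# Hypothetical illustration only -- not Spotify's actual ELS formula.

def expected_cost(latency, success_rate, outstanding):
    """Rough time until a successful reply: a new request waits behind
    the q outstanding ones, and failures force retries (divide by s)."""
    return latency * (outstanding + 1) / success_rate

def pick_machine(machines):
    """machines: list of (name, latency_seconds, success_rate, outstanding)."""
    return min(machines, key=lambda m: expected_cost(m[1], m[2], m[3]))[0]

machines = [
    ("fast-but-busy", 0.010, 1.00, 30),  # 10 ms, but 30 requests queued
    ("slow-but-idle", 0.050, 1.00, 1),   # 50 ms, nearly idle
    ("flaky",         0.010, 0.10, 1),   # fast, but fails 90% of the time
]
print(pick_machine(machines))  # prints: slow-but-idle
```

Under this toy formula the nearly idle machine wins: its expected cost (0.10) beats both the fast-but-busy machine (0.31) and the flaky one (0.20).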


ELS: latency based load balancer, part 1

Load Balancing Most Spotify clients connect to our back-end via an accesspoint, which forwards client requests to other servers. In the picture below, the accesspoint has a choice of sending each metadataproxy request to one of 4 metadataproxy machines on behalf of the end user. The client should get a quick reply from our servers, so if one machine becomes too slow, it […]
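As a toy illustration of the problem (the tick model and numbers are made-up assumptions, not the accesspoint's real logic): plain round-robin keeps sending an equal share of traffic to a slow machine, while choosing the machine with the fewest outstanding requests naturally routes around it.

```python
import random
random.seed(0)

# Toy simulation: four hypothetical "metadataproxy" machines, one of them
# 10x slower. Compares round-robin with "fewest outstanding requests".
# Purely illustrative -- not the accesspoint's actual algorithm.

SERVICE_TICKS = [1, 1, 1, 10]     # machine 3 needs 10 ticks per request

def simulate(choose, n_requests=1000):
    counts = [0, 0, 0, 0]         # requests assigned to each machine
    work = [0, 0, 0, 0]           # remaining ticks of queued work
    for tick in range(n_requests):
        i = choose(tick, work)
        counts[i] += 1
        work[i] += SERVICE_TICKS[i]
        for j in range(4):        # each machine completes one tick of work
            work[j] = max(0, work[j] - 1)
    return counts

def round_robin(tick, work):
    return tick % 4

def fewest_outstanding(tick, work):
    least = min(work)
    return random.choice([i for i, w in enumerate(work) if w == least])

print(simulate(round_robin))        # every machine gets 250 requests
print(simulate(fewest_outstanding)) # the slow machine gets far fewer
```

Round-robin gives the slow machine the same 250 requests as everyone else, so its queue grows without bound; picking the least-loaded machine sends it only the traffic it can absorb.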



Monitoring at Spotify: Introducing Heroic


This is the second part in a series about Monitoring at Spotify. In the previous post I discussed our history of operational monitoring. In this part I’ll be presenting Heroic, our scalable in-house time series database, which is now free software. We built it to address the challenges we […]


Monitoring at Spotify: The Story So Far


This is the first in a two-part series about Monitoring at Spotify. In this post, I’ll be discussing our history, the challenges we faced, and how they were approached. Operational monitoring at Spotify started its life as a combination of two systems: Zabbix and a homegrown RRD-backed graphing system named “sitemon”, which used Munin for collection. […]


Improving the accessibility of our iOS client

Story Much of the UI of our iOS application is rendered through an internal framework called Ceramic. It’s a tool that allows us to stitch together collection views with different layouts while keeping them memory-efficient and covering the usual meta tasks like logging, loading, and error handling. It was first used in the New Releases […]