Welcome to Part 2 of my journey through the artistic side of Data Science. Here's Part 1, ICYMI!
Music is life. I've believed that for as long as I can remember. Music can make or break a mood, make you more productive, make you relax, make you lift more at the gym, you name it!
That's why AI-made music is particularly interesting to me. Let's have a look at some of the amazing machine creations out there.
Google is Bach
Keeping with tradition, let's start with Google.
For Bach's 334th birthday, Google's Magenta project built the famous Bach Doodle - a mini game that harmonises any user-given input in the style of Bach. What's behind the doodle? The Coconet model.
The Magenta team built and trained Coconet on 300+ chorale harmonies written by Bach. The model randomly erases notes, then uses a Convolutional Neural Network to regenerate them.
The cool thing about Coconet is that it doesn't need to produce notes in chronological order, unlike many other machine learning models dealing with music. It can generate notes at any point in a composition, and keep doing so continuously!
In fact, that's exactly how Coconet works internally - it repeatedly generates a harmony, gets rid of unfitting notes, then generates new ones in their place. The model loops until it believes that all notes in the harmony are a good match and can do Bach justice.
☝️ Coconet repeatedly erasing and rebuilding its own harmony
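The erase-and-regenerate loop can be sketched in a few lines of Python. Everything here is a toy stand-in: `toy_model_fill` plays the role of the real convolutional model (which I'm not reproducing), and the pitch choices are just made-up placeholder logic.

```python
import random

PITCHES = list(range(60, 72))  # one octave of MIDI pitch numbers

def toy_model_fill(harmony, position, rng):
    """Stand-in for Coconet's CNN: guess a pitch from the neighbouring notes."""
    neighbours = [harmony[i] for i in (position - 1, position + 1)
                  if 0 <= i < len(harmony) and harmony[i] is not None]
    if neighbours:
        return rng.choice(neighbours)  # stay close to the surrounding notes
    return rng.choice(PITCHES)

def gibbs_rewrite(harmony, steps=50, erase_fraction=0.25, seed=0):
    """Repeatedly erase a random subset of notes and regenerate them."""
    rng = random.Random(seed)
    harmony = list(harmony)
    n_erase = max(1, int(len(harmony) * erase_fraction))
    for _ in range(steps):
        erased = rng.sample(range(len(harmony)), n_erase)
        for i in erased:
            harmony[i] = None                          # erase
        for i in erased:
            harmony[i] = toy_model_fill(harmony, i, rng)  # regenerate
    return harmony

melody = [60, 62, 64, 65, 67, 69, 71, 72]
result = gibbs_rewrite(melody)
print(result)  # an 8-note line with every slot filled back in
```

The real model scores every candidate note with a neural network instead of copying neighbours, but the outer loop - erase, regenerate, repeat until it all fits - is the same idea.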
Play something for us, Clara
Clara can play the piano. I've heard her play jazz, classical and chamber music myself. I've just never seen Clara.
That's because Clara is a Long Short-Term Memory (LSTM) network - a Recurrent Neural Network that is great at recognising patterns in sequences, not just single objects.
Christine McLeavey Payne created Clara using MIDI music files and the FastAI library. One of her biggest breakthroughs was viewing music as a language. Just like the machine learning models behind text prediction when you type, Christine built Clara by turning her music library into text and feeding it to the neural network.
This gave Clara the ability to generate music both "chordwise" and "notewise" - imagine being able to predict not just character by character, but word by word. That gives Clara much more "intelligence" when creating music, hence the LSTM, which can make decisions based both on recent memory (the last note) and long-term memory (the chord being played).
Put your circuits up!
This one is one of my personal favourites - the AI DJ. Created by the cool Japanese collective Qosmo, this chill-looking bot can work side by side with a human DJ, selecting music, beatmatching the songs and watching how the crowd reacts.
The AI starts with a very sophisticated music selection process - three neural networks working together, inferring the drum beats, instruments and genre of the song the human DJ is playing. Once the AI figures out the current song, it looks at the nearby cluster of similar music and picks one of those tracks.
A human then places the vinyl on the turntable, so no worries, we still have our jobs, yay!
The AI DJ then matches the tempo of the two songs. Qosmo built a model using Reinforcement Learning for that - the machine learning model has a "target score" and it keeps doing trial-and-error learning until it is able to beat that score.
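In the spirit of that trial-and-error loop, here's a deliberately simplified sketch (a greedy search, far cruder than an actual reinforcement learning agent, and in no way Qosmo's model): the agent nudges the tempo, is scored on how close it gets to the target BPM, and stops once the score beats the target.

```python
import random

def beatmatch(current_bpm, target_bpm, target_score=-0.5, seed=42):
    """Trial-and-error tempo matching: keep nudges that improve the score."""
    rng = random.Random(seed)
    bpm = current_bpm
    for step in range(1, 10001):
        action = rng.choice([-1.0, -0.1, 0.1, 1.0])   # nudge the pitch fader
        candidate = bpm + action
        # score = negative distance to the target tempo (higher is better)
        if -abs(candidate - target_bpm) > -abs(bpm - target_bpm):
            bpm = candidate                            # keep improving moves
        if -abs(bpm - target_bpm) >= target_score:
            return bpm, step                           # score beaten: matched
    return bpm, step

matched_bpm, steps_taken = beatmatch(current_bpm=120.0, target_bpm=128.0)
print(round(matched_bpm, 1), steps_taken)
```

A real RL setup would learn a policy over many episodes instead of greedily accepting moves, but the "act, get scored, improve until the target score is beaten" loop is the core of it.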
Finally, when the song is matched, the AI (like any good DJ) starts watching the crowd's reaction - using motion sensors, deep learning and the OpenPose library to detect the amount of movement in the crowd. If the movement is below a set threshold, the AI DJ "pumps it up" by adding some extra sounds to the song!
Bonus: Machines listening to music???
This is the piece that started my journey into artistic data science.
It's not exactly "AI singing", but it is uniquely beautiful, and you must have your sound on while you experience what Xander Steenbrugge created.
Neural Synesthesia is the marriage of data science, sound and visuals, co-written by human and AI. Xander trains deep learning models on a specific drawing style until the model is able to produce its own artworks - similar to the training data, but still totally unique.
Xander then processes music of his choice to extract features from it that a machine can understand. The deep learning model then listens to the music and expresses itself in painting. Yes, the AI uses input from the music to produce original output in a visual form, and it's absolutely beautiful.
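"Features a machine can understand" can be something as simple as an energy value per slice of time. Here's a bare-bones sketch of that idea (my own simplification, not Xander's actual pipeline): cut the waveform into windows and compute the RMS energy of each, giving the visual model one number per time step to react to.

```python
import math

def rms_energy(samples, window=256):
    """Root-mean-square energy per non-overlapping window of samples."""
    feats = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        feats.append(math.sqrt(sum(s * s for s in chunk) / window))
    return feats

# A synthetic 1 kHz tone at an 8 kHz sample rate, just for demonstration.
signal = [math.sin(2 * math.pi * 1000 * t / 8000) for t in range(1024)]
features = rms_energy(signal)
print(len(features), round(features[0], 3))  # 4 windows, each ~0.707 for a sine
```

Real systems extract richer features (spectrograms, chroma, onsets) with libraries like librosa, but the principle is the same: turn sound into a sequence of numbers the model can paint to.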
----
Thank you for reading! I would love to hear what you think, so feel free to ping me or leave a comment.
If you are working with music, production or creative projects and have a knack for data, do check out the Le Wagon Data Science Bootcamp. In 9 weeks you can start your journey to build the next AI musician!