The intersection of artificial intelligence and creativity has intrigued us for a while. Can we automate creative work, or generate it intelligently at the press of a button? In fact, the button already exists (Dredge, 2017). On Jukedeck’s website, a personalised song can be created simply by specifying the genre, mood, tempo, instruments and track length you want. Jukedeck then sells a royalty-free licence for the tailored track: $0.99 for an individual or small business, $21.99 for a larger company. User input can take other forms, too. AI Duet, a music service launched by Google, invites people to play a few piano notes and then generates its own melody in response, while Humtap Music analyses human vocals to create an accompanying instrumental. The generative music app Scape lets users combine backgrounds, colours and shapes to trigger the auto-generation of a personal soundtrack.

The latest step forward in the AI-generated music sphere is “Break Free,” released in August 2017 as the first single from I AM AI, a human–AI collaborative album whose music is written and produced entirely by artificial intelligence. It is a collaboration between YouTube star Taryn Southern and Amper, a company that has developed a web-based service using artificial intelligence to generate professional-sounding music in seconds. The only manual inputs Amper needs are beats per minute, rhythm, mood, and style of music. For example, you can churn out “exciting” classic rock or “brooding” ’90s pop, swap in different virtual instruments, and change parameters like tempo and song length. Southern used Amper to compose the music in its entirety, then penned her own vocal melodies and lyrics, which she recorded and added to Amper’s music to make the final product.

The music Amper creates is not going to win any Grammys or show up on the charts; it is more akin to a stock music library, supplying simple, somewhat inorganic-sounding songs free of royalty and licensing problems. Amper has now announced an integration with Adobe’s Creative Cloud and the launch of its public API, which will let Adobe’s video-editing software generate songs for users’ self-produced films. For businesses and small content creators on a tight budget, the songs Amper generates can easily serve as background music for personal creative work, advertisements, games and other settings. The aim of Amper and similar music services is not for computers to replace composers, but for these programs to become helpful tools that human artists want to create music with.
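To make the workflow concrete, here is a hypothetical sketch of what a request to a parameter-driven music-generation service might look like. The endpoint, field names and response structure are invented for illustration; they are not Amper’s actual public API.

```python
# Hypothetical sketch of a parameter-driven music-generation request.
# The endpoint and field names are invented for illustration; they do
# not correspond to Amper's real public API.
import requests

payload = {
    "style": "classic_rock",  # genre/style of the track
    "mood": "exciting",       # emotional character
    "bpm": 120,               # beats per minute
    "length_seconds": 90,     # target track length
    "instruments": ["electric_guitar", "drums", "bass"],
}

response = requests.post(
    "https://api.example-music-ai.com/v1/tracks",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
track = response.json()
print(track.get("download_url"))  # e.g. a link to the rendered audio
```

However the real interface is shaped, the essential point is the same: a handful of high-level parameters in, a finished track out.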

AI–human collaborative music is clearly a burgeoning sector, and businesses and researchers alike are trying to answer the same question: can machines do more than make random decisions? Can they understand the meaning of a request and make a decision within that context? “Machines are still doing what they always did, and that’s what we tell them to do,” composer and AI researcher David Cope has said (Cills, 2017). “When you hear someone say my work is ‘computer composed,’ it isn’t — it’s computer assisted.” Now, programs like Flow Machines from Sony’s Computer Science Laboratory (CSL) in Paris and Google’s Magenta project are changing the terms of this debate by using machine learning techniques, including recurrent neural networks and reinforcement learning, in place of hard-coded rules. These systems teach themselves to recognise patterns in data and then make autonomous decisions based on those patterns, without being explicitly programmed.

Magenta’s ambition is to design algorithms that learn how to generate art and music, potentially creating compelling artistic content on their own. One of its approaches uses reinforcement learning: the computer is first trained on a wide range of musical styles, then produces its own works guided by a human artist’s rules and feedback, which lets artists directly control the flavour of the output. In the long term, Magenta wants to advance the state of machine-generated art and build a community of artists around it; its short-term goal is to build generative systems that plug into the tools artists already work with, providing platforms that connect artists to machine learning models.
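To give a flavour of that reinforcement-learning idea, here is a minimal sketch loosely modelled on Magenta’s published “RL Tuner” work, in which the reward combines hand-written music-theory rules with the preferences of a model pretrained on real music. The pretrained_log_prob function below is a toy stand-in, not Magenta’s actual code.

```python
# A minimal sketch of reward shaping for reinforcement-learning melody
# tuning, loosely modelled on Magenta's published "RL Tuner" idea.
# `pretrained_log_prob` is a toy stand-in for a note model trained on a
# musical corpus; it is not Magenta's actual API.
import math

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def pretrained_log_prob(history, note):
    """Toy stand-in for a trained note model: mildly prefers small steps."""
    if not history:
        return math.log(1 / 12)
    step = abs(note - history[-1]) % 12
    return -0.5 * step  # smaller melodic leaps score higher

def theory_reward(history, note):
    """Hand-written music-theory feedback: reward in-scale notes and
    penalise immediate repetition."""
    reward = 1.0 if note % 12 in C_MAJOR else -1.0
    if history and note == history[-1]:
        reward -= 0.5
    return reward

def combined_reward(history, note, c=0.5):
    # RL reward = music-theory rules + staying close to the learned style
    return theory_reward(history, note) + c * pretrained_log_prob(history, note)

# Greedy roll-out just to show the reward in action; a real system would
# train a policy network against this reward instead.
melody = [60]  # start on middle C (MIDI note 60)
for _ in range(7):
    melody.append(max(range(55, 72), key=lambda n: combined_reward(melody, n)))
print(melody)
```

The balance constant c is where the human stays in the loop: turn it down and the hand-written rules dominate; turn it up and the learned style does.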

Sony’s Flow Machines has a similar goal: to research and develop artificial intelligence systems adept at generating music autonomously or in collaboration with human artists (Flow Machines, 2017). When musicians want to compose a new song, they first have the program select a specific musical style, drawn from an individual composer such as Bach or The Beatles, from a set of different artists, or, of course, from the style of the musician using the system. The program runs an analytical model known as a Markov chain, which identifies patterns in those selections and then imitates and varies them to create an original composition. The computer calculates the probabilities of certain chord progressions, melodic sequences and rhythms, and uses these probabilities to generate new, plausible variations. The system can even identify the melody a musician is playing in real time, recognise the style in which it is being played, and generate new passages that fit that style. Flow Machines unveiled its first result, the AI-composed pop song “Daddy’s Car”, in September 2016. It was composed from selections of 1960s psychedelia in the style of The Beatles, drawing on a database of more than 10,000 diverse lead sheets. The song’s lyrics were penned by a human composer, who also helped arrange the AI-generated segments of music.
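The underlying technique is simple enough to sketch. Below is a minimal first-order Markov chain over chord progressions: count which chord follows which in a corpus, then sample new progressions from those transition probabilities. The tiny corpus here is purely illustrative, nothing like Flow Machines’ actual database.

```python
# A minimal first-order Markov chain over chord progressions: count which
# chord follows which in a corpus, then sample new progressions from the
# resulting transition probabilities. The corpus is illustrative only.
import random
from collections import defaultdict

corpus = [
    ["C", "Am", "F", "G"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G"],
]

# transitions[a][b] = number of times chord b followed chord a
transitions = defaultdict(lambda: defaultdict(int))
for progression in corpus:
    for current, nxt in zip(progression, progression[1:]):
        transitions[current][nxt] += 1

def next_chord(current):
    """Sample the next chord in proportion to how often it followed
    `current` in the corpus."""
    chords, counts = zip(*transitions[current].items())
    return random.choices(chords, weights=counts, k=1)[0]

def generate(start="C", length=8):
    progression = [start]
    for _ in range(length - 1):
        progression.append(next_chord(progression[-1]))
    return progression

print(generate())  # e.g. ['C', 'F', 'G', 'C', 'Am', 'F', 'C', 'G']
```

A production system would work from thousands of lead sheets and model melody and rhythm as well as harmony, but the principle is the same: count the transitions, then sample plausible new sequences from them.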

These methods put real, live creators front and centre, emphasising the potential of post-human collaboration. “I think it’s an iterative process. Every new technology that made a difference in art took some time to figure out,” said Douglas Eck, the Magenta team lead, when asked whether human–machine collaboration can be more creative (Hutson, 2017). “I love to think of Magenta like an electric guitar. Rickenbacker and Gibson electrified guitars with the purpose of being loud enough to compete with other instruments onstage. Jimi Hendrix and Joni Mitchell and Marc Ribot and St. Vincent and a thousand other guitarists who pushed the envelope on how this instrument can be played were all using the instrument the wrong way, some said—retuning, distorting, bending strings, playing upside-down, using effects pedals, etc. No matter how fast machine learning advances in terms of generative models, artists will work faster to push the boundaries of what’s possible there, too.”

Things get quite philosophical as these startups and research teams grapple with fundamental questions of creativity and humanity. Reviewing the thinking behind AI-generated music might give us insight into how the human composition process works. The connection between human emotion, inspiration and creation has always been hard to define, but building artificial intelligence systems forces us to ask how the same system works in the human brain. As the artistic output of AI-generated and human-created music becomes more and more indistinguishable, will these studies put those human brains in danger of being replaced by machines? At this stage, AI-generated music still struggles to move people the way human music does: to make them jump up and dance, to cry, to smile. Moving people requires triggering emotions, and that takes an understanding of the context in which human emotions are triggered. If AI can learn to at least mimic human emotions, then that final frontier may be breached. But that is a long, long way off.


Dredge, S. (2017). AI and music: will we be slaves to the algorithm? The Guardian. Available at: https://www.theguardian.com/technology/2017/aug/06/artificial-intelligence-and-will-we-be-slaves-to-the-algorithm (Accessed 23 Nov. 2017).

Cills, H. (2017). Can AI Make Musicians More Creative? MTV News. Available at: http://www.mtv.com/news/2983208/ai-artificial-intelligence-music/ (Accessed 23 Nov. 2017).

Flow Machines. (2017). Flow Machines: AI music-making. Available at: http://www.flow-machines.com/ (Accessed 23 Nov. 2017).

Hutson, M. (2017). How Google is making music with artificial intelligence. Science | AAAS. Available at: http://www.sciencemag.org/news/2017/08/how-google-making-music-artificial-intelligence (Accessed 23 Nov. 2017).