Although programming something as complex as the human mind still seems distant, we are experiencing tremendous advances in the use of Machine Learning and, for a few years now, specifically in Deep Learning. Both fall under Artificial Intelligence, a field devised to make machines smarter, perhaps even smarter than humans.
But what is Machine Learning? In its most basic form, ML is the practice of using algorithms to parse data: to learn from it in order to make a prediction or suggestion about a particular problem. To that end, programmers refine algorithms that weigh a set of variables so the model is as accurate as possible for a particular task, and, of course, the machine is trained on a large amount of data, giving those algorithms the opportunity to improve.
Since the dawn of AI, algorithms have evolved with the goal of analyzing data and obtaining better results: decision trees, inductive logic programming, clustering for storing and reading large volumes of data, and so on. Following the evolution of Machine Learning over the last decade, a particular technique known as Deep Learning has become widespread. Instead of teaching a computer a huge list of rules to solve a problem, DL gives the computer a model that evaluates examples, together with a small collection of instructions for modifying the model when it makes errors. Over time, these models are expected to solve the problem extremely accurately, because the system learns to extract patterns from the data.
Now, DL is pushing all of us toward another reality, one in which we can interpret our world differently through image recognition, natural language analysis, and audio production, and anticipate many problems thanks to the extraction of behavioral patterns. This is something the Machine Learning we knew a few years ago did not allow us to do. Without a doubt, audio is one of the areas where machine learning will have plenty of room for action. This industry produces a massive amount of data, and making the best use of it is key to effectiveness, constant improvement, and spotting trends. In fact, it is already possible to implement these systems today, and they will undoubtedly become commonplace in the near future.
Let’s see how it can be possible.
Although there are different techniques for implementing Deep Learning, one of the most common is simulating a system of artificial neural networks within data-analysis software. That is the basic principle. Audio data is not very intuitive to work with, but it is, of course, the raw material for extracting valuable information that will, in turn, allow new technologies to be built. The big difficulty when starting with sound is that, unlike tabular data or images, it is not easy to represent in tabular format.
Images can easily be represented as arrays because they are based on pixels. Each pixel has a value indicating its black-and-white intensity (or, for color images, a separate intensity for red, green, and blue). Sound, on the other hand, is a much more complex format to work with. For one thing, sound is a mixture of wave frequencies at different intensities, so a conversion into some kind of tabular or matrix data set is necessary before any machine learning can be performed. For another, sound has a time dimension: it is actually more comparable to video than to images, since it consists of fragments of some duration rather than a capture at a single point in time.
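To make this concrete, here is a minimal sketch of what digitized sound looks like once it is sampled: just a one-dimensional array of amplitude values over time. The sample rate and the synthetic 440 Hz tone are illustrative choices, not values the article prescribes; a real workflow would load an actual recording instead.

```python
import numpy as np

# Illustrative example: digitize one second of a 440 Hz sine tone.
sample_rate = 22050          # samples per second (an illustrative choice)
duration = 1.0               # seconds
t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
waveform = np.sin(2 * np.pi * 440.0 * t)   # amplitude values in [-1, 1]

# The result is an ordinary one-dimensional array: one number per instant.
print(waveform.shape)        # (22050,)
```

Unlike an image, whose array axes are spatial, this array's single axis is time, which is exactly why a further transformation is needed before it resembles the tabular input most models expect.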
So, since sound, or music, is something we listen to, it is hard to imagine how to make it digital. However, digital music is everywhere, so that problem has already been solved; that is the first point to consider. What comes next? One approach is to load and play a song in an interactive computing platform such as Jupyter Notebook, which creates and shares computational documents. Then the songs are cut into equally long chunks, from which mel spectrograms are created in a data format suitable for Keras input. But this is just one example.
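The chunking and spectrogram steps can be sketched in plain NumPy. This is a simplified illustration, not the article's exact pipeline: it synthesizes a sine tone instead of loading a song, and it computes a plain magnitude spectrogram rather than a true mel spectrogram (in practice a library such as librosa would handle the mel scaling); the function names and parameter values are illustrative.

```python
import numpy as np

def split_into_chunks(signal, chunk_len):
    """Cut a 1-D signal into equally long chunks, dropping any remainder."""
    n_chunks = len(signal) // chunk_len
    return signal[: n_chunks * chunk_len].reshape(n_chunks, chunk_len)

def magnitude_spectrogram(chunk, frame_len=512, hop=256):
    """Short-time Fourier magnitudes: one spectrum per overlapping frame."""
    frames = np.asarray([chunk[i : i + frame_len]
                         for i in range(0, len(chunk) - frame_len + 1, hop)])
    window = np.hanning(frame_len)           # taper each frame's edges
    return np.abs(np.fft.rfft(frames * window, axis=1))  # (frames, bins)

# Stand-in for a loaded song: five seconds of a 440 Hz tone.
sample_rate = 22050
song = np.sin(2 * np.pi * 440.0 * np.arange(5 * sample_rate) / sample_rate)

chunks = split_into_chunks(song, chunk_len=sample_rate)   # 1-second chunks
spec = magnitude_spectrogram(chunks[0])
print(chunks.shape)  # (5, 22050) -> five equally long chunks
print(spec.shape)    # (85, 257)  -> time frames x frequency bins
```

Each chunk becomes a 2-D array of time frames by frequency bins, which is precisely the image-like matrix format a Keras convolutional model can consume.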
It would be naïve to think that artificial intelligence is going to take over audio production overnight. In reality, the work of sound engineers remains essential in this field. What is about to happen is a progressive reliance of sound engineers on technologies such as machine learning to process large amounts of data, and thereby catch errors, recognize patterns, correct distortions, and speed up tasks that consume a lot of human workers' time. This would improve the audio industry exponentially… and it will certainly happen!
For instance, machine learning has facilitated the slow but steady construction of smart-speaker interfaces, as well as the use of artificial intelligence for accurate and fast signal processing, among other applications. In this way, sound engineers can focus on other, executive decisions that are fundamental to production, such as source and channel coding.