Google has surprised us once again. The company is no stranger to bold moves in new technologies, but this time it has presented a new touchscreen synthesizer. It is an alternative to traditional synthesizers, which combine waveforms to generate sound. Google's touchscreen synthesizer is assisted by AI: it uses the NSynth machine learning technology to interpret a variety of source sounds and generate entirely new ones.
This technology – NSynth – lets the synthesizer encode recorded sounds as sets of numbers, mathematically combine those sets, and then decode the result into sound, producing sounds that are new and never heard before. The hardware lets users blend four sound sources on an X/Y touch pad, playing and sequencing the results via MIDI while "morphing between sound sources in real time". It may all sound confusing, but the result is incredible: just by touching the screen you can create an endless palette of sounds and, in more skilled hands, music.
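The encode–blend–decode idea behind the X/Y pad can be sketched in a few lines. The sketch below is a hypothetical illustration, not the real NSynth API: it stands in four toy number arrays for the encoded "latent" representations of four sounds and bilinearly blends them according to a touch position, which is the kind of mathematical mixing of number sets described above. The function name `morph` and the toy two-number embeddings are assumptions for illustration only.

```python
import numpy as np

def morph(corner_embeddings, x, y):
    """Blend four encoded sounds based on a touch position.

    corner_embeddings: four arrays of numbers (toy stand-ins for the
    numeric encodings of four source sounds), ordered top-left,
    top-right, bottom-left, bottom-right.
    x, y: touch position on the pad, each in [0, 1].
    Returns a bilinear mix of the four encodings; in a real system
    this mixed encoding would then be decoded back into audio.
    """
    tl, tr, bl, br = corner_embeddings
    top = (1 - x) * tl + x * tr        # blend along the top edge
    bottom = (1 - x) * bl + x * br     # blend along the bottom edge
    return (1 - y) * top + y * bottom  # blend between the two edges

# Four toy "encodings" standing in for four source sounds.
corners = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
           np.array([0.0, 0.0]), np.array([1.0, 1.0])]

# Touching a corner reproduces that sound's encoding exactly...
corner_sound = morph(corners, 0.0, 0.0)
# ...while touching the center mixes all four equally.
center_sound = morph(corners, 0.5, 0.5)
```

Touching the pad continuously sweeps `x` and `y`, so the blended encoding (and hence the decoded sound) changes smoothly, which is what the "morphing" between sources refers to.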
Although Google does not plan to sell the AI synthesizer commercially, the good news is that it is accessible to anyone: those interested can try the NSynth technology in the web version of the Google synthesizer and, after downloading it, can even add their own sound sources.
Watch the video below and check it out for yourself.