made in collaboration with aphex twin, the midimutant learns how to program your dx7 synth so you don't have to. equipped only with a microphone input and midi output, the midimutant runs on a raspberry pi and uses artificial evolution to grow new sounds on hardware synthesisers, mimicking an example sound you provide.
we have an article in the january 2018 magpi magazine and richard also talks more about this project in his interview with synth designer tatsuya takahashi.
at some point we will release source code and instructions on how to build your own. for the moment there is some technical info and notes on the hardware and pi setup here.
how it works: each patch in a population of initially random patches is sent to the synth via sysex midi messages, auditioned, sampled through the microphone input and compared to the target sound using mfcc analysis. the best-matching patches are chosen to form the next generation, using their raw sysex patch data as genetic material, and the population converges (most of the time) on similar sounds. unlike a neural network or other machine learning algorithms, artificial evolution does not need a model of the underlying parameter space - i.e. how the synth internally functions to create sound - so midimutant can be used on any synthesiser with a documented sysex dump format.
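the evolutionary loop above can be sketched in a few lines of python. this is a toy illustration, not the midimutant source: the patch size, population size and mutation rate are made-up values, and the fitness function compares parameter bytes directly as a stand-in for the real audition step (send patch over sysex, sample the audio, compare mfcc frames against the target recording).

```python
import random

PATCH_SIZE = 128     # assumed patch body length for this sketch
POP_SIZE = 16        # made-up population size
GENERATIONS = 50
MUTATION_RATE = 0.05

def random_patch():
    # each parameter is a 7-bit midi data byte
    return [random.randrange(128) for _ in range(PATCH_SIZE)]

def fitness(patch, target):
    # stand-in for the real pipeline: audition the patch on the synth,
    # sample it via the microphone and measure mfcc similarity. here we
    # just measure closeness of the raw parameter bytes (higher is better).
    return -sum(abs(a - b) for a, b in zip(patch, target))

def mutate(patch):
    # randomise a few parameter bytes in the sysex "genetic material"
    return [random.randrange(128) if random.random() < MUTATION_RATE else p
            for p in patch]

def crossover(a, b):
    # single-point crossover between two parent patches
    cut = random.randrange(1, PATCH_SIZE)
    return a[:cut] + b[cut:]

def evolve(target):
    pop = [random_patch() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        parents = pop[:POP_SIZE // 2]   # keep the best patches
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, target))
```

note that nothing here knows anything about fm synthesis - only the fitness measure and the sysex dump format tie it to a particular instrument, which is why the same approach transfers to other synths.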
some conceptual background can be found in this paper (although the midimutant is a more naive and freeform approach to the same problem):
andrew horner, james beauchamp, and lippold haken, "machine tongues xvi: genetic algorithms and their application to fm matching synthesis," computer music journal, 17:4, pp. 17-29, winter 1993.