The first time I heard Oxygene was on a set of headphones in 1977. Strange, eerie sounds produced by instruments unlike anything I had heard before. By the end of side 1 of the record I realized something unusual: no one had ever produced music like that before. I was unable to get out of my chair. Maybe unwilling was more accurate. I was floating, completely in a trance. I had no previous experience to compare this with, but I instantly knew one thing for certain; I had found a solution to my issue with asthma.
I had my favourite music genres and artists like any other teenager, but I needed Jean-Michel. This unusual “synthesized” music became a form of self-hypnosis for me, and the beginning of a lifelong interest in this new music style.
It is easy in hindsight to see why synthesized music became linked with AI. Unlike traditional instruments, without the flow of electrons, no sound could be produced. This was considered artificial music, and rather than seeing it as a flaw, many groups exploited that perception. Artists became rigid and emotionless in performances to enhance the audience's experience.
A new type of dance emerged along with the music, one that saw young people display amazing feats of kinetic intelligence while emulating robotic movement.
While cognitive computing and machine learning seem more likely to be the future of AI than the occurrence of The Singularity, there is one thing that I would want, should it occur. I want to hear the music. I want to hear a thinking machine's version of the music we have been simulating for almost 40 years. I want to hear synthesized music created by real artificial musicians.