Mienophone

A digital synthesizer using facial expressions to modulate sounds


Mienophone has been created in collaboration by Zhe Wang and Daniel Heitz since 2019.

Music controls our emotions. It can sometimes feel like being a string puppet, with music as the puppet master. It can make us dance or sink into sadness.

As machine learning algorithms become better at reading human emotions, can we use this technology to reverse this power of control? What would music sound like if it were directly controlled by our expressions, and not the other way around? How would we react? Would it perhaps reinforce the emotion? Can technology help us become more aware of our emotions and understand them better? And how does all this challenge the purpose of art and the role of the artist in the future?

These are the questions the artists explore with the Mienophone. In the exhibition, the artist duo conducts a dialogue through different media: the Mienophone installation, video works, and photographs.

The project is funded by the KARIN ABT-STRAUBINGER Stiftung.

Technical support: Berlin Glassworks


About "Mienophone"

Mienophone is an installation consisting of a glass sculpture and a software synthesizer developed in Max, a visual programming language for music and multimedia. A camera records a live video stream of the viewer, and selected frames of the stream are transmitted to the Microsoft Cognitive Services Vision API.

The machine learning algorithm returns a numeric value for each of a set of emotions detected in the player's face: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Each value controls a set of parameters of the software synthesizer, allowing the viewer to influence the sound by expressing emotions.
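A minimal sketch of this mapping stage might look as follows. The emotion dictionary is shaped like the eight scores named above; the parameter names (`midi_note`, `cutoff_hz`, `vibrato_depth`) and the mapping rules are illustrative assumptions, not the artists' actual Max patch.

```python
# Hypothetical sketch: turning per-frame emotion confidences (0.0-1.0)
# into synthesizer parameters. Mapping choices are illustrative only.

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def emotions_to_params(scores):
    """Map a dict of emotion confidences to a dict of synth parameters."""
    # The dominant emotion picks the oscillator pitch (assumed mapping):
    # each emotion gets its own note, spread two semitones apart.
    dominant = max(EMOTIONS, key=lambda e: scores.get(e, 0.0))
    midi_note = 48 + EMOTIONS.index(dominant) * 2
    # Happiness opens the filter, sadness closes it (illustrative choice).
    cutoff_hz = 200 + 4000 * scores.get("happiness", 0.0) \
                    - 150 * scores.get("sadness", 0.0)
    # Surprise adds vibrato depth.
    vibrato = scores.get("surprise", 0.0) * 0.5
    return {"midi_note": midi_note,
            "cutoff_hz": max(cutoff_hz, 100.0),
            "vibrato_depth": vibrato}

# Example frame result: a mostly happy, slightly surprised face.
params = emotions_to_params({"happiness": 0.9, "surprise": 0.1,
                             "neutral": 0.0, "sadness": 0.0})
```

In a real installation the resulting parameter dict would be sent on to the synthesizer each time a new frame is analyzed, for instance as OSC messages that the Max patch receives.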

The camera is placed inside an organic-shaped glass sculpture. Because the glass surface is not flat, the camera's line of sight is distorted, and the machine learning algorithm therefore reads a distorted image of the viewer. This raises the question of whether the algorithm can see us clearly at all: do AI and humans really see each other? Is the "seeing" equal?

The shape of the glass symbolically resembles a human face; like our imagination of AI, it is a human-like yet inhuman subject. Faced with this infant-sized, fragile, human-like device, we come to regard ourselves as parents of this new intelligence.
Glass and computer chips share the same principal element: silicon. The history of glassmaking can be traced back to antiquity, far before the invention of iron-smelting technology. Computer chips, by contrast, have existed for only a few decades, yet the driving force they lend to the development of human society cannot be quantified.

The development of machine learning algorithms is closely linked to the future of humankind. In the most optimistic scenario, we may create a utopia in which humans and superintelligence coexist peacefully. In the darkest, an omnipotent AI attains higher intelligence than human beings; humans live in a society controlled by AI, lose their creative ability, and can only lament their own destiny.