Aura Visualizer

PURPOSE

As more digital display screens appear in public spaces to advertise products, they offer an opportunity to serve as canvases for computer-generated art. These interactive displays can induce emotion and provoke thought by allowing people to manipulate the graphics or installation with their bodies and other physical features. The purpose of this project is to visually investigate and demonstrate the effects that our emotions have on others.

CONCEPT

The Aura Visualizer generates graphics that respond to the facial expression of the person interacting with the exhibit. This is achieved with a Kinect V2 sensor, which can detect facial expressions. Depending on the expression of the participant(s), the exhibit generates different graphics. For example, an angry expression generates a wave formation that appears red and slightly more aggressive in its wave radiation speed and shape, while a happy expression generates a similarly appropriate representation of that expression, or “emotion.” These generated graphics, or “auras,” interact with each other in a meaningful way: negative emotions adversely affect positive ones by changing a positive aura’s colour and appearance. For example, a “sad” aura interacting with a “happy” one slows the “happy” aura down, shifts it toward a blue hue, and makes it emulate the effects of the “sad” aura. By demonstrating how overwhelming emotions can affect those nearby, the exhibit provokes the thought that everyone’s actions, even subtle or indirect ones, have a lasting effect on those around them. A minimal sketch of this interaction rule follows.
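To make the interaction rule concrete, here is a minimal conceptual sketch in Processing. It is not the exhibit's actual code: the Aura class, its fields, and the 0.05 blend factor are all hypothetical, but they show how a detected emotion can select a hue and wave speed, and how a negative aura can pull an overlapping positive aura toward its own colour and pace.

    // Hypothetical illustration of the aura interaction rule; every name and
    // constant here is illustrative rather than taken from the exhibit.
    class Aura {
      float x, y, radius;
      color hue;         // e.g. red for "angry", blue for "sad", yellow for "happy"
      float waveSpeed;   // how quickly the rings radiate outward
      boolean negative;  // "angry" and "sad" auras count as negative

      Aura(float x, float y, float radius, color hue, float waveSpeed, boolean negative) {
        this.x = x; this.y = y; this.radius = radius;
        this.hue = hue; this.waveSpeed = waveSpeed; this.negative = negative;
      }

      // Called every frame for each pair of overlapping auras: a negative aura
      // drags this aura's colour and radiation speed toward its own, so a "sad"
      // aura gradually turns a "happy" aura blue and slows it down.
      void influencedBy(Aura other) {
        if (other.negative && dist(x, y, other.x, other.y) < radius + other.radius) {
          hue = lerpColor(hue, other.hue, 0.05);
          waveSpeed = lerp(waveSpeed, other.waveSpeed, 0.05);
        }
      }
    }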

DEVELOPMENT

The exhibit’s technical requirements included both hardware and software. On the hardware side, we used a Microsoft Kinect V2, which can track skeletal information and read the facial features of up to six people. On the software side, we used Processing together with Thomas Lengeling’s KinectPV2 Java library, which handled facial tracking, mapping vertex points to faces, skeleton tracking, depth mapping, and tracking the total number of users on screen. To calculate whether someone is displaying an open or a closed posture, we wrote code that determines the wingspan of each person in view of the sensor; the size of each person’s aura is determined by this wingspan value (see the first sketch below). An additional feature of the exhibit is revealed when everyone within view of the sensor is showing the same emotion: if everyone is smiling, joyful music begins playing, whereas if everyone is angry, an intense drum beat begins to play (see the second sketch below).
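For the posture calculation, the sketch below shows how the wingspan might be computed with KinectPV2's skeleton API, following the library's SkeletonColor example (method names can vary between versions). The pixel thresholds and the wingspan-to-radius mapping are placeholder values, not the exhibit's tuned numbers.

    import KinectPV2.*;

    KinectPV2 kinect;

    void setup() {
      size(1920, 1080);
      kinect = new KinectPV2(this);
      kinect.enableSkeletonColorMap(true);  // joints in colour-image coordinates
      kinect.init();
    }

    void draw() {
      background(0);
      ArrayList<KSkeleton> skeletons = kinect.getSkeletonColorMap();
      for (int i = 0; i < skeletons.size(); i++) {
        KSkeleton skeleton = skeletons.get(i);
        if (!skeleton.isTracked()) continue;
        KJoint[] joints = skeleton.getJoints();
        KJoint left  = joints[KinectPV2.JointType_HandLeft];
        KJoint right = joints[KinectPV2.JointType_HandRight];
        // Hand-to-hand distance as a rough wingspan: large when the arms are
        // spread (open posture), small when they are crossed (closed posture).
        float wingspan = dist(left.getX(), left.getY(), right.getX(), right.getY());
        // Placeholder mapping: wingspans of 200-1000 px become radii of 50-400 px.
        float auraRadius = map(constrain(wingspan, 200, 1000), 200, 1000, 50, 400);
        KJoint head = joints[KinectPV2.JointType_Head];
        noFill();
        stroke(255);
        ellipse(head.getX(), head.getY(), auraRadius * 2, auraRadius * 2);
      }
    }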
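For the group-emotion feature, here is a sketch of the all-smiling check, assuming KinectPV2's FaceData/FaceFeatures API (FaceProperty_Happy and the state value 3, the Kinect SDK's "Yes" detection result, should be verified against your library version). The sound handling uses Processing's Sound library; the file name and the updateMusic flow are illustrative. Note that "Happy" is a built-in Kinect face property, whereas detecting an angry face would rely on the project's own classification.

    import KinectPV2.*;
    import processing.sound.*;

    // Assumes kinect.enableFaceDetection(true) was called in setup() and that
    // joyfulLoop was loaded there, e.g. new SoundFile(this, "joyful.mp3").
    boolean everyoneHappy(ArrayList<FaceData> faces) {
      if (faces.size() == 0) return false;
      for (int i = 0; i < faces.size(); i++) {
        FaceData face = faces.get(i);
        if (!face.isFaceTracked()) continue;
        boolean happy = false;
        FaceFeatures[] features = face.getFaceFeatures();
        for (int j = 0; j < features.length; j++) {
          if (features[j].getFeatureType() == KinectPV2.FaceProperty_Happy
              && features[j].getState() == 3) {  // 3: detection result "Yes"
            happy = true;
          }
        }
        if (!happy) return false;
      }
      return true;
    }

    // Called once per frame from draw(): loop the joyful track only while
    // every tracked face is smiling.
    void updateMusic(KinectPV2 kinect, SoundFile joyfulLoop) {
      if (everyoneHappy(kinect.getFaceData())) {
        if (!joyfulLoop.isPlaying()) joyfulLoop.loop();
      } else {
        joyfulLoop.stop();
      }
    }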