Audiolizing Body Movement

Source

Naoyuki Houri, Hiroyuki Arita, Yutaka Sakaguchi. 2011. Audiolizing body movement: its concept and application to motor skill learning. In Proceedings of the 2nd Augmented Human International Conference (AH ’11), Article No. 13. Tokyo, Japan, March 13, 2011.

Summary

This article presents a project that transforms the posture and movement of the human body, or of human-controlled tools, into acoustic signals and feeds them back to the user in real time.

The authors note that sound effects play a large role in everyday life and events: many artificial systems, such as video games and cell phones, display information through the auditory channel. It is common to add sound effects to enhance the sensation of realism, or to reinforce images and signs through auditory events, which helps our brain learn their correlation.

However, body information such as posture, movement and muscle force is hard to perceive directly. That is why audiolizing these signals can be effective for enhancing and sensing our movement better. It also offers a way to compare the body states of different individuals, or of the same person on different occasions.

Practice

  1. Assistance of Soldering Work (an audiolization system that measures temperature)
  2. Audiolization of Calligraphy (a 6-axis force torque sensor attached to a brush)
  3. Pole Balancing Game (3D posture sensor)
  4. Acoustic Frisbee (3D acceleration sensor)
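
All four demonstrations share the same basic loop: sample a sensor, map the reading onto a sound parameter, and play it back immediately. As a minimal illustration in the spirit of the pole balancing game (my own sketch, not code from the paper; pins and ranges are assumptions), an Arduino could audiolize a posture reading like this:

    // Minimal audiolization loop (illustrative; pin numbers and ranges are assumptions).
    const int SENSOR_PIN = A0;   // e.g. the analog output of a posture/tilt sensor
    const int SPEAKER_PIN = 8;   // piezo speaker

    void setup() {
      pinMode(SPEAKER_PIN, OUTPUT);
    }

    void loop() {
      int raw = analogRead(SENSOR_PIN);          // 0..1023
      int pitch = map(raw, 0, 1023, 200, 2000);  // posture onto a 200-2000 Hz tone
      tone(SPEAKER_PIN, pitch);                  // immediate auditory feedback
      delay(10);                                 // ~100 Hz update rate
    }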


Relevance

If we want to work with different sensors as feedback systems to understand our gestures better (the signals and signs of our gestures and movements), this article might be helpful for seeing how they used different sensors with different subjects and on different occasions. We can use it as inspiration.

Kinesonic Approaches to Mapping Movement

Source

In Proceedings of the 2nd International Workshop on Movement and Computing (MOCO ’15), August 14–15, 2015, Vancouver, BC, Canada. ACM.
DOI: http://dx.doi.org/10.1145/2790994.2791020

Summary

This project introduces the RAKS system (Remote electroAcoustic Kinesthetic Sensing), performed here by a belly dancer.

Sensor technologies translate internal experiences into external ones: the system integrates the dancer’s movements (kinesthetic) with sonic elements via a wearable wireless sensor designed specifically for belly dance movement. The project uses a LilyPad Arduino, an ADXL345 accelerometer, a flex sensor, and LED rings. The mapping strategies are modeled on the relationship between playing techniques and acoustic instruments.

Translations of major movements from the dancer to the instruments (a sketch of the third mapping follows the list):

  1. Contraction and Release: Bow Pressure
  2. Curving and Straightening: Modulating Waveshape
  3. Accelerating and Decelerating: Pulses to Pitch
  4. Movement and Stillness: Oscillators
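
To make the third mapping concrete, here is a rough sketch of what “pulses to pitch” could look like on the hardware listed above. The paper does not publish code, so the Adafruit ADXL345 library, the speaker pin, and the ranges are my assumptions:

    // Hedged sketch of an "accelerating/decelerating -> pitch" mapping;
    // libraries, pins, and ranges are assumptions, not the authors' code.
    #include <Wire.h>
    #include <Adafruit_Sensor.h>
    #include <Adafruit_ADXL345_U.h>

    Adafruit_ADXL345_Unified accel(12345);  // I2C accelerometer, arbitrary sensor ID
    const int SPEAKER_PIN = 9;

    void setup() {
      pinMode(SPEAKER_PIN, OUTPUT);
      accel.begin();
    }

    void loop() {
      sensors_event_t event;
      accel.getEvent(&event);
      // Overall movement energy: magnitude of the acceleration vector (m/s^2).
      float mag = sqrt(event.acceleration.x * event.acceleration.x +
                       event.acceleration.y * event.acceleration.y +
                       event.acceleration.z * event.acceleration.z);
      // Faster movement -> higher pitch; stillness (~9.8 m/s^2 gravity) -> low hum.
      int pitch = constrain((int)(100 + (mag - 9.8) * 80), 100, 2000);
      tone(SPEAKER_PIN, pitch);
      delay(20);
    }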

 

As this suggests, the electronic music is driven entirely by the movement of the belly dancer. Instead of composing the electronic music/sound on a computer, the dancer moves her torso, chest and hips to create a sound of her own. While listening to the music, we can hear how each of the dancer’s movements affects the sound and its speed. Normally in a stage or dance performance, dancers shape their movement around how the music is played; in this project, dancer and music become one integrated piece at the same time.

 

Relevance

Since the article deals with dance movement and the interaction between on-body sensors and the audio system, we can relate this project to our own and use it as inspiration, in case one of our teams wants to work with the RAKS system.


A Compact, High-Speed, Wearable Sensor Network for Biomotion Capture and Interactive Media

Source:

Ryan Aylward and Joseph A. Paradiso. 2007. A compact, high-speed, wearable sensor network for biomotion capture and interactive media. In Proceedings of the 6th international conference on Information processing in sensor networks (IPSN ’07). ACM, New York, NY, USA, 380-389. DOI=http://dx.doi.org/10.1145/1236360.1236408

Summary:

The article “A Compact, High-Speed, Wearable Sensor Network for Biomotion Capture and Interactive Media” is about a wearable sensor technology for multiple dancers (and professional athletes). For this application it is important to achieve low latency and high resolution, to keep power consumption low, and to keep the device truly wearable. It is interesting to see whether it is possible to capture information that a video motion technique cannot. The article describes in detail the sensing strategies, the different wireless platforms, and several hardware details. Notably, they use the nRF2401A, which Lucas is going to check out next week. In feature extraction, they focus on dancer-to-dancer influence and group dynamics. One problem is that there are already so many ways to analyse and interpret a single dancer that it gets even harder to find a clean mapping for a group of dancers. To convert the collected data into sound or video, they simply record it and play it back repeatedly into Max/MSP to find good mappings. To sum up, they found a way to collect low-latency, high-resolution data technically; only the interpretation and meaningful output could be improved.
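
The paper’s nRF2401A predates today’s common libraries, so here is only a rough transmitter-side sketch assuming its successor, the nRF24L01, with the widely used RF24 Arduino library; the pins, pipe address, and analog accelerometer are my own choices, not details from the paper:

    // Hedged sketch: streaming motion samples at ~100 Hz over an nRF24L01.
    #include <SPI.h>
    #include <RF24.h>

    RF24 radio(9, 10);                 // CE, CSN pins (hypothetical wiring)
    const byte address[6] = "NODE1";   // pipe address shared with the receiver

    struct Sample {                    // one motion sample per packet
      uint32_t ms;                     // timestamp for latency checks
      int16_t ax, ay, az;              // raw accelerometer readings
    };

    void setup() {
      radio.begin();
      radio.setDataRate(RF24_1MBPS);   // favour throughput for ~100 Hz streaming
      radio.setAutoAck(false);         // skip ACKs to keep latency low
      radio.openWritingPipe(address);
      radio.stopListening();           // this node only transmits
    }

    void loop() {
      Sample s = { millis(),
                   (int16_t)analogRead(A0),
                   (int16_t)analogRead(A1),
                   (int16_t)analogRead(A2) };
      radio.write(&s, sizeof(s));      // fire-and-forget broadcast
      delay(10);                       // ~100 Hz, matching the rate in the paper
    }

Disabling auto-acknowledgement trades reliability for latency, which seems consistent with the paper’s low-latency priority.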

Relevance for our project:

It looks like the nRF2401A could be quite useful for our project, and their technology could be helpful whenever we have problems at some stage between input and output. Even so, I don’t think we should start with a dance ensemble; we should rather focus on one actor. We should also consider using Max/MSP to manipulate audio.

Enabling the VJ as Performer with Rhythmic Wearable Interfaces

Source:

Andreas Zingerle, Tyler Freeman. 2011. Enabling the VJ as Performer with Rhythmic Wearable Interfaces. In Proceedings of the 19th ACM International Conference on Multimedia (MM ’11), 765-766. Scottsdale, Arizona, USA, November 28 – December 1, 2011.

Summary

Andreas Zingerle and Tyler Freeman, the authors of this article, present an experimental wearable controller called the VJacket. The jacket is equipped with several sensors that detect body movements (bending, touching, hitting) and send the information to a VJ program.

The main function is to manipulate the visual output in a rhythmic way. Sensors are the best option for this because they are small (some are flexible) and can be placed on the body, in contrast to a mouse and keyboard, which are very rigid. Sensors on the body are also more precise than a mouse when it comes to making rhythmical sound.

With the VJacket the performer is able to control a video using only his/her body movements. The authors mention the maracas-based “rhythmism”, a project in which the instrument becomes a performance tool: depending on the speed and the way it moves, the video changes. The authors are convinced this technology is the future for karaoke bars, rock bands, and DJ performances (DJs will be able to move and walk around the nightclub).

In this article the authors mention that they designed their own Arduino software (Arduino2OSC) in order to use more than one sensor. It is very interesting software because it lets you adjust each sensor’s values: the sensors gradually get a little damaged with each performance, and to avoid replacing them you just have to change the values in the code.
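
Arduino2OSC itself is not reproduced in the article, so the following is only my guess at the idea behind adjustable sensor values: a worn flex sensor’s usable raw range shrinks over time, and rescaling it in software avoids replacing the part. The pin, the range constants, and the serial bridge to OSC are assumptions:

    // Hypothetical recalibration sketch in the spirit of Arduino2OSC.
    const int FLEX_PIN = A0;

    // Update these as the sensor degrades (measure at rest and at full bend).
    int rawMin = 180;   // raw reading when the sensor is relaxed
    int rawMax = 840;   // raw reading at full bend; drops as the sensor wears out

    void setup() {
      Serial.begin(115200);  // a bridge on the PC could forward this as OSC
    }

    void loop() {
      int raw = analogRead(FLEX_PIN);
      int scaled = map(raw, rawMin, rawMax, 0, 127);  // normalize to a MIDI-like range
      scaled = constrain(scaled, 0, 127);             // clamp readings outside the range
      Serial.println(scaled);
      delay(20);                                      // ~50 Hz update rate
    }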

Relevance for our project: This article is relevant to our project because it gives us the option to explore Arduino2OSC and to think about attaching the sensors to clothes instead of directly to the body. It is also a very interesting article that gives us some inspiration for the project.

Future Circus: A Performer-Guided Mixed-Reality Performance Art

 

Source:

Hsin Huang, Hsin-Chien Huang, Chun-Feng Liao, Ying-Chun Li, Tzu-Chieh Tsai, Li-jia Teng, Shih Wei Wang. 2015. Future Circus: A Performer-Guided Mixed-Reality Performance Art. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2015 ACM International Symposium on Wearable Computers (UbiComp/ISWC ’15 Adjunct), 551-556. Osaka, Japan, September 7-11, 2015.

Summary

The article “Future Circus: A Performer-Guided Mixed-Reality Performance Art” introduces a mixed-reality performance art using motion capture and wearable computing. With this technology, the performer and the virtual characters are able to interact with each other in real time. The virtual effects are pre-made animations that are driven by another performer wearing motion capture devices.

The authors mention that it is important for them to show the performer’s skills and not only what his/her “clone” is doing; in other words, to have both of them (performer and animation) be part of the show. The theme of this performance is also worth mentioning. It is about a special circus: the story takes place in a world where animals are extinct, and the “animals” shown there are made of animal remains. Their souls are trapped in the body of a human being; thanks to him the remains can still move, since he controls them. But the circus master keeps the human locked in a cage. Then a little girl finds out what is going on and helps the animal souls heal through dancing, and in the end they go to heaven in peace. The performance is very interesting because it touches on topics like animal abuse and the environment.

Visuals are very important in this performance; colors are used to enhance the feelings of the audience. All the performers wear motion sensors (the live performers AND the performer hiding backstage), which means all of them influence the animation. All the motion data is transmitted in real time. For example, when the performer spins, the lights in the animation get brighter (a toy sketch of this kind of mapping follows).
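
As a toy illustration of this motion-to-visual mapping (the WISE-Platform is not public, so this is purely my own construction, not the project’s code), a spin rate from a gyroscope could be mapped onto a brightness value like this:

    // Toy sketch, not WISE-Platform code: the faster the performer spins
    // (gyro rate), the brighter the animation's light is driven.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Map a spin rate in degrees/second onto an 8-bit brightness value,
    // reaching full brightness at two full turns per second (720 deg/s).
    int spinToBrightness(double degPerSec) {
      double norm = std::min(std::fabs(degPerSec) / 720.0, 1.0);
      return static_cast<int>(norm * 255.0);
    }

    int main() {
      for (double rate : {0.0, 180.0, 360.0, 720.0, 1000.0})
        std::printf("%6.0f deg/s -> brightness %d\n", rate, spinToBrightness(rate));
      return 0;
    }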

The technology behind it is called the WISE-Platform, a low-cost technology with impressive results, although fitting the performers to the animal animations is still not satisfactory. The exciting part of this technology is that the animation does not merely mirror the performer but can also react to and interact with the other performers.

Relevance for our project: This article is relevant to our project because it gives us the option to experiment (if possible) with the WISE-Platform, and if it is really as low cost as the article says, we could make something amazing. The fact that there is such a nice story behind it also motivates me to find a serious topic to work on and to make people conscious of some important things happening in the world.

 

Evaluation on Performer Support Methods for Interactive Performances Using Projector

Source: Jun Ikeda, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto (Kobe University, Kobe, Japan). 2009. Evaluation on Performer Support Methods for Interactive Performances Using Projector. In Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia (MoMM ’09), 105-112. Kuala Lumpur, Malaysia, December 14-16, 2009. ACM, New York, NY, USA.

Summary

Lately, performances are getting more attention when they use computer technologies. Performances in which a person has some sort of interaction with projections are very entertaining to watch.

This article covers the evaluation of interactive performances that use a projector. Its goal is to improve the way performers interact with the projections and to support them using display devices such as an HMD.

In the entertainment world, interactive performances are well known and there is always the desire to exploit their potential. The aim is usually to improve the performance itself, but this research is about supporting the performer.

In this article the authors try to find the main problems on stage, so they classify performances into two principal types: in the first, the projection is basically a movie and the performer has to memorize everything in order to perform in time; in the second, the projections are based on the performer’s actions.

The article also mentions the situations a performer can face during a performance: facing the audience, facing the screen, parallel to the screen, in contact with the screen, far from the screen, and using a part of the body. In most situations the performer has difficulty seeing the entire projected image.

Several display devices (HMD, monitor, projection on the floor, earphone) were taken into consideration, and the authors discuss their pros and cons. In the end they chose the HMD as the most effective and adapted it as the display device, additionally adding a wireless mouse to the experiment. They had performers play games using the HMD and the wireless mouse in order to evaluate recognition speed, understanding of object position, and timing recognition of changing images. The results varied; sometimes the problem was the delay in displaying images.

Other evaluations examined the naturalness of the performer when facing the audience, when far from the screen and using a real object, and when touching the screen. In conclusion, they found their method effective, but they want to improve it and attempt similar work with more than one performer.

Relevance for our project: This article is very relevant to our project. I see it as a piece of advice: I am sure this research can help us in developing our ideas and in finding/creating the hardware we may need.

A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles

Source: Ryan Aylward, S. Daniel Lovell, and Joseph A. Paradiso. 2006. A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles. In Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks (BSN ’06). IEEE Computer Society, Washington, DC, USA, 65-70. DOI=http://dx.doi.org/10.1109/BSN.2006.1

Summary

The article “A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles” presents a prototype for capturing the local on-body dynamics of a dance ensemble and transmitting them power-efficiently in real time for musical feedback. It is important that the technology stays scalable: there should be a way for more than two dancers to interact with the stage. Furthermore, they focus on body-worn motion capture rather than computer vision. They use the nRF2401 radio to transmit the data wirelessly; with that technology they achieve a range of 50 feet and are capable of 100 Hz with 30 nodes. Their system makes it possible to detect simultaneously acting or call-and-response acting dancers and other group dynamics. They show some test examples of their data and discuss gesture classification, which will also be part of future work.
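
As a rough illustration of how simultaneous versus call-and-response movement could be detected in two such data streams (my own sketch, not the authors’ method), a windowed cross-correlation at different lags does the job: a peak near lag 0 suggests simultaneous movement, while a peak at a positive lag suggests one dancer answering the other.

    // Hedged sketch: normalized cross-correlation between two dancers'
    // acceleration-magnitude streams, sampled at the same rate.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Correlation of a[t] with b[t + lag] over their overlap, normalized to [-1, 1].
    double correlationAtLag(const std::vector<double>& a,
                            const std::vector<double>& b, int lag) {
      double sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
      int n = 0;
      for (size_t t = 0; t + lag < b.size() && t < a.size(); ++t) {
        double x = a[t], y = b[t + lag];
        sa += x; sb += y; saa += x * x; sbb += y * y; sab += x * y; ++n;
      }
      if (n < 2) return 0.0;  // not enough overlap at this lag
      double cov = sab - sa * sb / n;
      double var = std::sqrt((saa - sa * sa / n) * (sbb - sb * sb / n));
      return var > 0 ? cov / var : 0.0;
    }

    int main() {
      // Toy data: dancer B repeats dancer A's burst two samples later.
      std::vector<double> a = {0, 0, 1, 3, 1, 0, 0, 0, 0, 0};
      std::vector<double> b = {0, 0, 0, 0, 1, 3, 1, 0, 0, 0};
      for (int lag = 0; lag <= 4; ++lag)
        std::printf("lag %d: r = %.2f\n", lag, correlationAtLag(a, b, lag));
      // The peak at lag 2 reveals the call-and-response delay.
      return 0;
    }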

Relevance for our project

The article shows that the nRF seems to be a quite useful technology. I am excited to see Lucas work with the radio in the upcoming week. In our case the focus is not so much on dance ensembles, so we are not as interested in collecting that much data, but it might be useful for future work in a project that succeeds ours. If we work with gestures, their approach to detecting them can be interesting.

 

Moving Music: Exploring Movement-to-Sound Relationships

Source

Jan C. Schacher. 2016. Moving Music: Exploring Movement-to-Sound Relationships. In Proceedings of the 3rd International Symposium on Movement and Computing (MOCO ’16), Article No. 19. Thessaloniki, Greece.

Summary

“Moving Music” focuses on learning how ‘gesture’ can be used when working with sound and real-time generated music, and on how ‘gesture’ influences the perception, affect and impact of music.

It is therefore important to study the inter-relationship and dependency between a musician and a dancer, and between movement and sound, through the use of electronic sound processes that are linked by technically sensed movements.

dancer – movement – musician – sound = audience perception

To carry out this project, the dancer is equipped with bracelet-style wireless motion sensors that capture degrees of acceleration, rotation and attitude. One sensor is attached to each ankle (left and right) and one to the stomach, and as the dancer performs, each gesture shapes its own music.

To obtain position information about the dancer in space, the stage is observed by a depth camera located front and center of the stage at floor level. The use of the stage is implemented via a map of zones overlaid on the stage; each zone has a radial sensitivity curve that rises from its edge to its center (a small sketch of this weighting follows).
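
Here is a sketch of that zone weighting as I understand it from the description (not the project’s actual code): a circular zone whose sensitivity rises from 0 at the edge to 1 at the center, with a raised-cosine profile as one plausible choice of curve.

    // Hedged sketch of a radial zone-sensitivity curve.
    #include <cmath>
    #include <cstdio>

    const double kPi = 3.14159265358979;

    struct Zone { double cx, cy, radius; };  // center (meters) and radius of a stage zone

    // Weight in [0, 1] for a dancer at (x, y): 1 at the zone center, 0 at the edge.
    double zoneSensitivity(const Zone& z, double x, double y) {
      double d = std::hypot(x - z.cx, y - z.cy) / z.radius;  // normalized distance
      if (d >= 1.0) return 0.0;                              // outside the zone
      return 0.5 * (1.0 + std::cos(kPi * d));                // smooth rise toward center
    }

    int main() {
      Zone z{0.0, 0.0, 2.0};  // made-up zone: radius 2 m at stage center
      for (double x = 0.0; x <= 2.5; x += 0.5)
        std::printf("%.1f m from center -> weight %.2f\n", x, zoneSensitivity(z, x, 0.0));
      return 0;
    }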

 

Video: http://mgm.zhdk.ch/?page_id=1406

Relevance

I haven’t mentioned everything here, but this article looks at deeper psychological and technical ways of understanding how a stage performance can unite the movements (dancers), the music (musicians) and the audience.

Since our project is about stage performance and interaction, this article might be helpful to see how this project dealt with the depth camera for performance on the stage.

 

The Challenges of Interactive Dance: An Overview and Case Study

Source: Siegel, W. and Jacobsen, J. 1998. The Challenges of Interactive Dance: An Overview and Case Study. Computer Music Journal 22, 4 (Winter 1998), 29-43.

Summary: The article centres on an interactive dance performance that tries to combine the medium of dance with that of music. The goal was to create a system capable of letting the dancer take an active (live) role in the composition of the music, which later succeeded. This was done by designing a sensor suit consisting of eight stretch sensors attached to the dancer’s main body joints. Hindrances were various factors, starting with the choice of the right hardware and software and their implementation, and ending with the necessary modification of the actual performance.

The article can be separated into four main parts. The first is the definition of interaction and what the project participants set out to establish. The second is the creation of the final product, in particular the hardware and software available at the time and the choices made among them. This proved rather difficult, since the available options had to fulfil a dozen-and-one requirements stretching from cost to comfort and robustness, which already seem hard to combine on their own. This part closes with the actual composition of the performance and what had to be considered for the desired result. The third part is the performance itself, divisible into four sections, and the final use of the sensors and the developed software. This section brings out the entire complexity of the project: the difficulty lay not in any single field of expertise but in combining the different divisions, such as scientists, dancers, composers and choreographers. The fourth and last part of the article covers the evaluation and the resulting conclusions.

Relevance: Yes, the article is from 1998 and therefore historic in the field, but it shows the approach and realization of such a project to such an extent that one cannot deny its value when attempting a similar project oneself. It clearly shows the entire approach, the making, and the difficulties faced together with their solutions.

Since our project will most definitely have similarities to the one presented in the text, it is a great insight for us into an already professionally finished project and what can be achieved.

Computers and Live Performance: Interactive or Interference?

Source: Sam Hamm. 2003. “Computers and Live Performance: Interactive, or Interference?” Society of Composers, Inc., Region IV Conference, Stetson University, DeLand, FL, November 8, 2003.

Summary: Sam Hamm, the author of this article, tries to establish a basic understanding of what to expect when combining live performance and computers. For a common ground of reasoning, he first elaborates on what interaction and interference are. The first he specifies as the performer’s “output” becoming the computer’s “input” and vice versa; the second is to be interpreted as disturbance in any form. For the sake of the argument, he stipulates that such a live performance cannot exist without either of the two components.
The interpretation that results from these definitions is that lowering the interference enhances the performance and the performer’s freedom at the task, since he/she is able to interact more naturally with the given system.
Hamm then goes on to list the advantages and disadvantages of an interactive performance design, arguing it would improve the current state of the art (computer-assisted performances) by giving the performer greater feedback, allowing an effective logistical setup, lowering rehearsal limitations, and opening new paths of creation and perception. On the other side, it magnifies already existing sources of interference, such as an increased need for monitoring due to more technically outsourced work, and lastly the need to first learn such an interactive system before future use.

Relevance for our project: Since the article primarily focuses on a basic understanding of what one might encounter and have to expect when working on an interactive performance, it is quite enlightening. A lot of what is said, even though old, can be transferred to our project and should act as a guideline for future decision-making. The rest no longer seems to apply nowadays and can be treated as a lesson in history.