Categories: 5.2 Final Major Projects and Thesis

Week 2 reference research

Title:

Culture and 3D animation: A study of how culture and body language affects the perception of animated 3D characters

Dahle, T. (2019) Culture and 3D animation: A study of how culture and body language affects the perception of animated 3D characters

This section discusses culture and body language: how emotions are expressed, how culture shapes body language, and how body language is read. It also covers gender, facial expressions, and animation/games.
Culture affects many aspects of people’s lives, including how they think about and react to the world around them and how their body language is used and perceived.


Humans use body language to communicate thoughts and emotions with others in social situations.


People from different cultural backgrounds express emotions differently; for example, people from East Asian countries tend to be more reserved in their emotional expression. The universally recognised facial expressions are happiness, sadness, surprise, fear, disgust, and anger, while body language conveys these emotions through head position, hand gestures, the feet, and posture.


Body language is an important part of communication, accounting for around 65% of how a message is interpreted. Context, gesture grouping, consistency, coherence, culture, and whether gestures are instinctive or learned all affect how body language is used. When interpreting body language, it is important to be aware of cultural differences to ensure successful communication.


Culture affects how people perceive the body language of animated characters, and developers should be aware of cultural differences when creating characters. Body language can be exaggerated or manipulated to achieve a desired effect.


Disney’s 12 Principles of Animation are guidelines for 2D and 3D animation developed by Disney animators Frank Thomas and Ollie Johnston. The principles are Squash and Stretch, Anticipation, Staging, Straight Ahead Action and Pose to Pose, Follow Through and Overlapping Action, Slow In and Slow Out, Arcs, Secondary Action, Timing, Exaggeration, Solid Drawing, and Appeal.
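The Slow In/Slow Out principle says motion should accelerate out of one key pose and decelerate into the next, so frames bunch up near the keys. A minimal sketch of this idea, using a smoothstep easing curve (the function names here are illustrative, not from any particular animation package):

```python
def slow_in_slow_out(t: float) -> float:
    """Smoothstep easing: slow near t=0 and t=1, fastest in the middle."""
    return t * t * (3.0 - 2.0 * t)

def interpolate(start: float, end: float, t: float) -> float:
    """Interpolate a pose value between two keyframes with eased timing."""
    return start + (end - start) * slow_in_slow_out(t)

# Sample five frames between keyframe values 0 and 100.
frames = [round(interpolate(0.0, 100.0, i / 4), 1) for i in range(5)]
print(frames)  # → [0.0, 15.6, 50.0, 84.4, 100.0]
```

Note how the spacing between successive values is small near the keyframes (0→15.6, 84.4→100) and large in the middle (15.6→50→84.4); with linear interpolation every step would be an even 25.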

______________________________________________________________________________________________

Title:

Emotion Capture: Emotionally Expressive Characters for Games

Ennis, C. et al. (2013) Emotion Capture: Emotionally Expressive Characters for Games. New York, NY: Association for Computing Machinery

People can identify emotions through body or facial movements alone, but it’s best to combine facial and body movements to create a more expressive character.

Avatars are used in many applications, from entertainment to education, and must be expressive and emotional. Real-time facial motion capture systems can interact with virtual characters to display facial and body emotions.

The authors’ goal is to study how virtual characters use body and facial motion to express emotions while speaking, whether male and female characters portray emotions differently, and whether screen size affects how those emotions are perceived.

Research shows that humans have a strong ability to recognise the behaviour of others and to infer higher-level context from point-light data alone. Research also shows that humans are susceptible to the emotions conveyed by others.

Avatars can provide further insight into the perception of mood and personality; combining verbal expression, gesture frequency, and gesture display can strengthen the impression of extroversion.

They used fully captured facial and body motion to study how emotions are perceived in natural, unexaggerated performances, and anticipated that female and male characters might convey emotions differently through body and facial movement. The virtual characters were animated with motion capture using a 21-camera Vicon optical system, and typical motion-capture artifacts were avoided by optimising the marker set and the number and placement of the cameras. They recruited four trained female and four trained male actors and asked each to perform a series of phrases conveying four basic emotions: anger, fear, happiness, and sadness. The actors’ voices were not used in the experiment.

To avoid ambiguity of motion and body shape, only two avatars were used, one male and one female, with a bone-based approach driving both body and facial geometry. Participants were shown video clips of the eight actors expressing the four emotions, split across three blocks, and answered two questions at the end of each clip.

They recruited 14 participants via a university email list; each completed the experiment within 30 minutes and received a book token as compensation for their time.

They found that anger and sadness were recognised most accurately from the full character (face and body together), then from the body alone, and least accurately from the face alone, while fear was recognised equally well from the full character and from the body alone.
