- Paper submission deadline is extended to April 1, 2014.
Recent years have seen a rapid increase in the size of multimedia collections. Automatic multimedia content analysis and annotation are therefore needed to organize these collections effectively and to help users find multimedia quickly. Conventional content-based multimedia analysis focuses on generic semantic content, such as news or sports. However, many people watch videos or listen to music to satisfy certain emotional needs, so it is also necessary to tag multimedia based on its affective content. Unlike conventional content-based media analysis, which typically identifies the main event depicted in a piece of media, media affective computing aims to identify media that evoke certain emotions in the users who consume it. Introducing such a personal, human touch into media tagging is expected to benefit both the users and the businesses that create, distribute, and host the media. Furthermore, multimedia usually carries information across multiple channels, such as facial expressions, body gestures, and vocal intonation. Recent work has shown that affective computing in multimedia benefits from fusing information across these modalities.
The purpose of this workshop is to draw the attention of the multimedia community to human-centered multimedia computing, in which sensing human affect and emotion plays an essential role. More specifically, we welcome research papers focusing on, but not limited to, the following topics:
- Static images
- Facial expression