Special Sessions

On the submission page, authors should indicate the special session as their main topic so that their papers are assigned correctly.


The call for special sessions is now closed, but it remains available for reference here.

You can find a printer-friendly version of the special sessions here.

Utilising Big Unlabelled and Unmatched Data for Affective Computing

Organizers: Hesam Sagha, Zixing Zhang, Florian Metze, Björn Schuller

There has been a great deal of research on affect recognition from different modalities such as speech, video, and text. Despite these efforts, the analyses are often limited to small collected datasets, and the resulting models consequently generalise poorly to other recording scenarios. This lack of 'big' labelled data for affective computing hampers the creation of deep models, which have so far proved their effectiveness mostly in related fields such as speech and video recognition. Thanks to the popularity of social multimedia, collecting audiovisual and textual data has become a comparatively easy task. Labelling such data, however, demands a huge amount of (expert) human work, which can be expensive and time-consuming. Additionally, the collected data may be of low quality and therefore not sufficiently reliable for training a model. Furthermore, data collected from different sources may be highly dissimilar, which can also degrade performance. In this special session, we therefore seek approaches that aim to increase the amount of reliably labelled data with less human effort, as well as to match data distributions between labelled and un- or partially-labelled corpora. This will be a crucial step towards bringing affective computing to an industrial level and its applications into everyday life. For further details see https://sites.google.com/view/acii17ubuudac/accueil

Topics of interest (indicative, not limited to):

  • semi-supervised learning and active learning
  • zero-resource technologies, such as unsupervised learning
  • transfer learning for domain/model adaptation
  • using weak labels and co-training
  • crowdsourcing for collecting and annotating large-scale data
  • affective data augmentation and synthesis
  • reinforcement learning
  • cloud/distributed computing algorithms for big affective data
  • applications (such as cross-language/cross-cultural adaptation, cross-modality transfer learning, ...)
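As a loose illustration of one direction solicited above (semi-supervised learning via pseudo-labelling), the sketch below shows how a small labelled set can be grown from a large unlabelled pool by keeping only high-confidence model predictions. It is a minimal, generic example with synthetic data, an assumed confidence threshold, and an off-the-shelf classifier; it is not code from the organisers.

```python
# Minimal self-training (pseudo-labelling) sketch for semi-supervised
# affect classification. All data here is synthetic; in practice the
# features would come from speech/video/text and the labels from a
# small hand-annotated subset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: 100 labelled and 10,000 unlabelled examples,
# 2 affective classes (e.g. high vs. low arousal), 20-dim features.
X_lab = rng.normal(size=(100, 20))
y_lab = rng.integers(0, 2, size=100)
X_unlab = rng.normal(size=(10_000, 20))

clf = LogisticRegression(max_iter=1000)
CONFIDENCE = 0.95  # assumed threshold: only accept very confident pseudo-labels

for it in range(5):
    clf.fit(X_lab, y_lab)

    # Predict on the unlabelled pool and keep only confident predictions.
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= CONFIDENCE
    if not confident.any():
        break

    # Move the confidently pseudo-labelled examples into the labelled set.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
    print(f"iteration {it}: added {confident.sum()} pseudo-labelled examples")
```

The same loop structure underlies many of the listed topics: active learning replaces the confidence filter with a query to a human annotator, and co-training uses two models trained on different views to label data for each other.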