Department of Computer Science and Technology, Nanjing University
Collaborative Innovation Center of Novel Software Technology and Industrialization
Abstract:
With the advent of social media, monitoring and filtering the images posted on
it has become an active research topic. People post images and videos on social
media to express their feelings or emotions, and the volume of such
behavior-driven data grows daily. This work focuses on classifying
person-behavior-oriented social media images into six classes: bullying,
threatening, depressed, sarcastic, and psychopathic, along with extraversion
(normal). The proposed approach first detects faces as foreground components
and treats the remaining (non-face) information as background components to
extract contextual features. Next, for each foreground and background
component, it applies the Hanman transform to study local variations within
the component. Based on the resulting Hanman (H) values, the approach combines
the H values of the foreground and background components according to their
contributions, yielding two feature vectors. The two feature vectors are then
fused with derived weights to produce a single feature vector, which is passed
to a CNN classifier for final classification. Experimental results on normal
and abnormal images collected from different social media outlets, as well as
on a benchmark dataset, show that the proposed approach is effective. In
addition, a comparative study with existing methods on the benchmark dataset,
a 6-class dataset, and a 10-class dataset shows that the proposed approach
outperforms them in scalability and robustness.
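The fusion step described above, combining the foreground and background feature vectors with derived weights, can be sketched as follows. The energy-based weighting and the vector dimension used here are illustrative assumptions for the sketch, not the talk's actual formulation:

```python
import numpy as np

def fuse_features(fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Fuse foreground and background feature vectors into one vector.

    Weights are derived from each vector's relative energy -- an
    illustrative choice; the actual weight derivation in the talk
    is not specified in the abstract.
    """
    e_fg = np.sum(fg ** 2)
    e_bg = np.sum(bg ** 2)
    total = e_fg + e_bg
    # Each component contributes in proportion to its energy.
    w_fg = e_fg / total
    w_bg = e_bg / total
    return w_fg * fg + w_bg * bg

# Example: two hypothetical 8-dimensional Hanman-value vectors.
fg = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
bg = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.1])
fused = fuse_features(fg, bg)
print(fused.shape)  # (8,)
```

The fused vector keeps the original dimensionality, so it can be fed directly to a downstream classifier such as the CNN mentioned above.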
Speaker Bio:
P. Shivakumara is a Senior Lecturer in the Faculty of Computer Science and
Information Technology, University of Malaya, Kuala Lumpur, Malaysia.
Previously, from 2008 to 2013, he was a Research Fellow in the Department of
Computer Science, School of Computing, National University of Singapore,
working on a video text extraction and recognition project. He has published
more than 150 research papers in venues including TPAMI, PR, CVIU, TCSVT, PRL,
IVC, IJDAR, ICCV, ACMMM, MTA, and ESWA.
Time: April 19, 10:00-11:00
Venue: Room 225, Computer Science and Technology Building