Talk Title: Social Multimedia as Sensors
Speaker: Prof. Jiebo Luo, Fellow of the IEEE, SPIE, and IAPR
Time: Friday, December 19, 2014, 14:30-16:00
Venue: Room 1712, Comprehensive Research Building, Sanpailou Campus
Hosts: School of Communications and Information Engineering; Jiangsu Key Laboratory of Image Processing and Image Communication; Office of Science and Technology
Speaker Biography:

羅杰波教授目前就職于美國(guó)羅徹斯特大學(xué) (University of Rochester, USA) 計(jì)算機(jī)科學(xué)系,是IEEE、SPIE和IAPR等國(guó)際著名學(xué)會(huì)的會(huì)士(Fellow),圖像處理、計(jì)算機(jī)視覺(jué)、機(jī)器學(xué)習(xí)等領(lǐng)域著名國(guó)際學(xué)者。羅杰波教授曾于“柯達(dá)實(shí)驗(yàn)室”從事研究長(zhǎng)達(dá)十五年,并擔(dān)任該實(shí)驗(yàn)室首席科學(xué)家。羅杰波教授是國(guó)際頂級(jí)會(huì)議ACM Multimedia 2010、CVPR 2012大會(huì)共同主席,Journal of Multimedia主編,并擔(dān)任IEEE Transactions on Pattern Analysis and Machine Intelligence(PAMI)、IEEE Transactions on Multimedia(TMM)、IEEE Transactions on Circuits and Systems for Video Technology(CSVT)、Pattern Recognition(PR)、Machine Vision and Applications(MVA)和Journal of Electronic Imaging(JEI)等國(guó)際頂尖學(xué)術(shù)期刊編委會(huì)成員。羅杰波教授的研究涉及圖像處理、計(jì)算機(jī)視覺(jué)、機(jī)器學(xué)習(xí)、數(shù)據(jù)挖掘、醫(yī)學(xué)影像分析、普適性計(jì)算等多個(gè)前沿領(lǐng)域,發(fā)表超過(guò)兩百篇學(xué)術(shù)論文,持有超過(guò)七十項(xiàng)美國(guó)專(zhuān)利。近年來(lái),羅杰波教授在社交多媒體研究及其社會(huì)應(yīng)用中做出了巨大的貢獻(xiàn)。
報(bào)告摘要:
Increasingly rich and large-scale social multimedia data (including text, images, audio, and video) are being generated and posted to social networking and media sharing websites. Researchers from many disciplines are developing methods for processing social multimedia and employing such rich multi-modality data for various applications. We present a few recent advances in using social multimedia as sensors. Specifically, this tutorial consists of two parts. The first part is on sensing users from heterogeneous, complex, and dynamic social multimedia. We will introduce four elements in the loop of sensing users: user profile, context, multi-modal input, and interactivity. In particular, we will address estimation of a user's profile, accurate and comprehensive estimation of a mobile user's geo-context from phone-captured photos, and personalized mobile recommendation based on context information. The second part is about sensing social activities from user-generated social multimedia content, including suggesting suitable social groups from a user's personal photo collection, producing popular and diverse tourism routes from crowd-sourced geo-tagged photos, extracting user sentiment from both textual and visual information in social media, and forecasting election outcomes based on image sharing activities and image sentiments. In addition, we will share interesting findings regarding cultural differences in social multimedia between the US and China, as well as thoughts on current challenges and future directions.
