Abstract - The goal of IMEV 2014 is to identify and promote novel computer vision algorithms, systems, and frameworks that are particularly suitable for intelligent and interactive information processing on mobile and wearable computing platforms. This is the third workshop in the series; this year, we expand the scope to include egocentric vision, an exciting emerging topic in computer vision. IMEV 2014 aims to bring together researchers to present the latest developments and technical solutions in the domain of intelligent mobile and egocentric vision, including novel algorithms, applications, and systems.


The decisions for IMEV 2014 papers have been made.

The list of accepted papers can be found here (link)


Mobile Vision - In the domain of mobile vision, we encourage researchers and engineers to propose interdisciplinary work that integrates different types of visual information with additional sensors, such as GPS, accelerometers, and gyroscopes, to enable novel applications and services on mobile computing platforms. Mobile vision has also received increasing interest from MPEG, the international standards body. Contributions related to ongoing computer vision standardization in MPEG, namely Compact Descriptors for Visual Search (CDVS) and its extension to video, Compact Descriptors for Video Analysis (CDVA), are also welcome.

Relevant research topics include, but are not limited to, the following areas:

Feature extraction on mobile devices
Motion analysis and recovery for mobile cameras
Gesture/object/location recognition with mobile cameras
3D mobile vision
Augmented reality on mobile devices
Human-computer interaction with mobile devices
Computer vision applications on hand-held devices
Indexing and retrieval of images and videos for mobile devices
Multi-sensor integration for mobile vision
Hardware and embedded systems for mobile vision
Mobile vision incorporated with robot vision
Related MPEG standards: CDVS (Compact Descriptors for Visual Search) and CDVA (Compact Descriptors for Video Analysis)
Other topics related to mobile vision.

Egocentric Vision - In the context of egocentric vision, devices like Google Glass allow capturing and recording rich visual data from an egocentric perspective. Wearable devices provide a unique opportunity to explore how humans understand and interpret the visual input from their eyes. First-person-view (FPV) observations align with a human's egocentric perspective of the world around him or her, whereas most existing computer vision technologies are based on fixed cameras or on scenes captured from viewpoints selected by photographers. Devices like Google Glass have the potential to change the way we view and interact with the things around us. Much remains to be done before wearable devices and their applications become widespread. We encourage researchers to make contributions to wearable visual computing from the varying perspectives of cognitive science, artificial intelligence, computer vision, and machine learning.

Relevant research topics include, but are not limited to, the following areas:

Visual feature learning from FPV videos
Egocentric video summarization, life-logging
Social activity analysis from FPV videos
Activity recognition in first-person vision
Eye-gaze tracking & attention modeling
Object recognition & tracking in FPV videos
Scene understanding in first-person video
Human-computer interaction issues in first-person vision
Privacy issues in first-person video
Other topics related to egocentric vision.

Questions - For any questions or comments about this workshop, please contact Dr. Chu-Song Chen (Email: song@iis.sinica.edu.tw).