2020 PAA Computer Vision and Machine Learning for Healthcare Applications
Special Issue description
Recent advances in technology have boosted the development and release of active and assisted living devices based on wearable and/or non-obtrusive visual and multi-modal signals for e-health and welfare support. These solutions are seamlessly integrated into the environment, for example sensor-based systems installed in elderly people's homes for ambient monitoring and intelligent visual warning.
In addition, research on ubiquitous computing has favored the implementation of more user-centered applications such as virtual tutoring, coaching agents, and physical rehabilitation and psychological therapy systems. For these systems to offer features that meet users' requirements and expectations and earn their acceptance, the trend is now shifting towards empathic solutions tailored to individual user needs. The new assistive systems must be able to understand the user's behaviors, mood and intentions and react to them in real time, as well as detect changes in behavior and health state in a timely manner. Furthermore, such systems are expected to infer the user's traits, attitudes and psychological profile in order to deliver a more personalized user-machine interaction.
These solutions require advanced computer vision and machine learning techniques, such as facial expression analysis, gaze and pose estimation, and gesture recognition, together with behavioral and psychological theories for modeling individual profiles. While these tasks currently achieve outstanding performance in controlled, prototypical environments (e.g., detecting facial expressions of emotion on static faces), the challenge lies in integrating and applying them in naturalistic scenarios, where extensive sources of variability (pose, age, behavior, mood, illumination conditions, dynamic speaking emotional faces, among others) affect the processing of the detected signals.
The Computer Vision and Machine Learning for Healthcare Applications special issue aims to collect the latest approaches and findings, as well as to discuss the current challenges of machine learning and computer vision based e-health and welfare applications. The focus is on the use of single- or multi-modal face, gesture and pose analysis. We expect this special issue to increase the visibility and importance of this area and to contribute, in the short term, to pushing the state of the art in the automatic analysis of human behavior for health and wellbeing applications.
Topics of interest include, but are not limited to:
- Multi-modal integration
- Psychological profiling from (audio)-visual and/or multi-modal data
- Approaches based on behavioral models from psychology
- Mobile-based and human-computer interaction applications
- User-understanding in human-computer interaction
- Human behavior analysis for health and well-being support
- Assistive technologies for supporting vulnerable people
- Virtual avatars and coaching
- Physical and psychological therapy systems
- Assistive care
- User acceptance of empathic assistive systems
- Real-time applications
- Datasets
More info at this link.
Submit your manuscript here.