GENERATING THERMAL HUMAN FACES FOR PHYSIOLOGICAL ASSESSMENT USING THERMAL SENSOR AUXILIARY LABELS

Citation Author(s):
Edward Raff, Sanjay Purushotham
Submitted by:
Catherine Ordun
Last updated:
23 September 2021 - 5:59pm
Document Type:
Presentation Slides
Document Year:
2021
Event:
Presenters:
Catherine Ordun
Paper Code:
3007
 

Thermal images reveal medically important physiological information about human stress, signs of inflammation, and emotional state that cannot be seen in visible images. A method for generating thermal faces from visible images would therefore be highly valuable to the telemedicine community as a way to surface this medical information. To the best of our knowledge, there are few works on visible-to-thermal (VT) face translation; most current works go in the opposite direction, generating visible faces from thermal surveillance images (thermal-to-visible, TV) for law-enforcement applications. We therefore introduce favtGAN, a VT GAN that combines the pix2pix image-translation model with an auxiliary sensor-label prediction network to generate thermal faces from visible images. Whereas most TV methods are trained on a single data source drawn from one thermal sensor, we combine face and cityscape datasets captured with similar sensors in order to bootstrap the training and transfer-learning task, which is especially valuable because visible-thermal face datasets are scarce. Experiments on these combined datasets show that favtGAN achieves higher SSIM and PSNR scores for generated thermal faces than training on a single face dataset alone.
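To make the objective concrete, the sketch below shows one plausible form of a pix2pix-style generator loss augmented with an auxiliary sensor-label term, as the abstract describes. The function name, loss weights, and the exact way the auxiliary cross-entropy is combined are illustrative assumptions, not the authors' published formulation.

```python
import numpy as np

def favtgan_generator_loss(d_fake, sensor_logits, sensor_label,
                           fake, real, lam_l1=100.0, lam_aux=1.0):
    """Hypothetical generator objective: adversarial + L1 (as in pix2pix)
    plus an auxiliary cross-entropy over thermal-sensor classes.

    d_fake        -- discriminator's probability that the generated image is real
    sensor_logits -- auxiliary head's logits over thermal-sensor classes
    sensor_label  -- index of the sensor that captured the target image
    fake, real    -- generated and ground-truth thermal images (arrays)
    """
    # Adversarial term: the generator wants D(fake) -> 1
    adv = -np.log(d_fake + 1e-12)
    # L1 reconstruction term, weighted as in pix2pix
    l1 = np.abs(fake - real).mean()
    # Auxiliary term: softmax cross-entropy on the sensor label,
    # which is what lets heterogeneous (face + cityscape) data share a model
    p = np.exp(sensor_logits - sensor_logits.max())
    p /= p.sum()
    aux = -np.log(p[sensor_label] + 1e-12)
    return adv + lam_l1 * l1 + lam_aux * aux
```

With a perfect generation (`d_fake` near 1, `fake == real`, correct sensor predicted with high confidence) the loss is close to zero; each term penalizes one failure mode independently.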
