Image/Video Processing

MAN-MADE OBJECT RECOGNITION FROM UNDERWATER OPTICAL IMAGES USING DEEP LEARNING AND TRANSFER LEARNING

Paper Details

Authors:
Xian Yu, Xiangrui Xing, Han Zheng, Xueyang Fu, Yue Huang, Xinghao Ding
Submitted On:
12 April 2018 - 9:35pm

Document Files

Underwater Images Recognition



[1] Xian Yu, Xiangrui Xing, Han Zheng, Xueyang Fu, Yue Huang, Xinghao Ding, "MAN-MADE OBJECT RECOGNITION FROM UNDERWATER OPTICAL IMAGES USING DEEP LEARNING AND TRANSFER LEARNING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2542. Accessed: Jul. 16, 2018.

IMAGE-BASED PM2.5 ESTIMATION AND ITS APPLICATION ON DEPTH ESTIMATION

Paper Details

Authors:
Jian Ma, Kun Li, Yahong Han, Pufeng Du, Jingyu Yang
Submitted On:
12 April 2018 - 9:09pm

Document Files

IcasspPoster2.pdf



[1] Jian Ma, Kun Li, Yahong Han, Pufeng Du, Jingyu Yang, "IMAGE-BASED PM2.5 ESTIMATION AND ITS APPLICATION ON DEPTH ESTIMATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2535. Accessed: Jul. 16, 2018.

Joint License Plate Super-Resolution and Recognition in One Multi-Task GAN Framework

Paper Details

Authors:
Wu Liu, Huadong Ma
Submitted On:
12 April 2018 - 9:06pm

Document Files

poster_ICASSP2018_MinghuiZhang.pdf



[1] Wu Liu, Huadong Ma, "Joint License Plate Super-Resolution and Recognition in One Multi-Task GAN Framework", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2533. Accessed: Jul. 16, 2018.

Edge-aware Context Encoder for Image Inpainting


We present the Edge-aware Context Encoder (E-CE): an image inpainting model that takes scene structure and context into account. Unlike the previous CE, which predicts the missing regions using context from the entire image, E-CE learns to recover the texture according to edge structures, attempting to avoid context blending across boundaries. In our approach, edges are extracted from the masked image and completed by a fully convolutional network. The completed edge map, together with the original masked image, is then fed into the modified CE network to predict the missing region.
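The edge-extraction step can be illustrated with a minimal numpy sketch. A gradient-magnitude detector stands in for the paper's edge extractor here; `masked_edge_map` and its mask convention (1 = missing pixel) are hypothetical, not from the paper:

```python
import numpy as np

def masked_edge_map(img, mask):
    """Gradient-magnitude edge map; edges inside the masked (missing) region
    are zeroed, since the edge-completion network must fill them in."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    edges[mask.astype(bool)] = 0.0
    return edges

# toy image: a vertical step edge, with a square hole in the middle
img = np.zeros((32, 32))
img[:, 16:] = 1.0
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True
edges = masked_edge_map(img, mask)
```

The edge map is strong along the step outside the hole and zero inside it, which is exactly the gap the completion network is asked to close.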

Paper Details

Authors:
Liang Liao, Ruimin Hu, Jing Xiao, Zhongyuan Wang
Submitted On:
12 April 2018 - 9:37pm

Document Files

Poster: Edge-aware Context Encoder for Image Inpainting



[1] Liang Liao, Ruimin Hu, Jing Xiao, Zhongyuan Wang, "Edge-aware Context Encoder for Image Inpainting", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2532. Accessed: Jul. 16, 2018.

Spatiotemporal Attention Based Deep Neural Networks for Emotion Recognition


We propose a spatiotemporal attention based deep neural network for dimensional emotion recognition in facial videos. To learn the spatiotemporal attention that selectively focuses on emotionally salient parts within facial videos, we formulate a spatiotemporal encoder-decoder network using Convolutional LSTM (ConvLSTM) modules, which can be learned implicitly without any pixel-level annotations. By leveraging the spatiotemporal attention, we also formulate 3D convolutional neural networks (3D-CNNs) to robustly recognize the dimensional emotion in facial videos.
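As a toy illustration of how an attention map gates features, the sketch below softmax-normalizes a spatial score map and pools a feature volume with it. In the paper the attention is produced by a ConvLSTM encoder-decoder; here it is hand-set, and `attention_pool` and all shapes are illustrative assumptions:

```python
import numpy as np

def attention_pool(features, attn_logits):
    """Softmax-normalize a spatial attention map and pool features with it.
    features: (C, H, W) feature volume; attn_logits: (H, W) unnormalized scores."""
    a = np.exp(attn_logits - attn_logits.max())
    a /= a.sum()
    return (features * a).sum(axis=(1, 2))  # (C,) attention-weighted descriptor

# attention concentrated on one "salient" location picks out its feature vector
feats = np.zeros((4, 8, 8))
feats[:, 2, 3] = np.arange(4.0)     # distinctive feature at location (2, 3)
logits = np.full((8, 8), -50.0)
logits[2, 3] = 50.0                 # attention mass almost entirely at (2, 3)
desc = attention_pool(feats, logits)
```

With near-one-hot attention, the pooled descriptor reduces to the feature vector at the salient location, which is the intended selective-focus behavior.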

Paper Details

Authors:
Sunok Kim, Seungryong Kim, Kwanghoon Sohn
Submitted On:
12 April 2018 - 8:08pm

Document Files

2018_ICASSP_jylee.pdf



[1] Sunok Kim, Seungryong Kim, Kwanghoon Sohn, "Spatiotemporal Attention Based Deep Neural Networks for Emotion Recognition", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2526. Accessed: Jul. 16, 2018.

SPARSE DISPARITY ESTIMATION USING GLOBAL PHASE ONLY CORRELATION FOR STEREO MATCHING ACCELERATION


In this study, we propose an efficient stereo matching method that estimates sparse disparities using global phase-only correlation (POC). Conventionally, cost functions must be calculated for all disparity candidates, and the associated computational cost has been an impediment to achieving real-time performance. We therefore use full-image 2D phase-only correlation (FIPOC) to detect the valid disparity candidates, which requires comparatively fewer calculations for the same number of disparities.
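Phase-only correlation itself is compact enough to sketch in a few lines of numpy: keep only the phase of the cross-power spectrum, and the inverse transform peaks at the dominant translation. This is a generic POC demo, not the paper's FIPOC candidate-selection pipeline:

```python
import numpy as np

def phase_only_correlation(a, b):
    """POC surface between equal-size images a and b; the peak location
    gives the translation taking b to a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12      # discard magnitude, keep phase only
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, 3, axis=1)       # simulate a 3-pixel horizontal disparity
surface = phase_only_correlation(shifted, ref)
peak = np.unravel_index(np.argmax(surface), surface.shape)
# the column index of the peak is the dominant disparity candidate
```

A single FFT pair per image scales with the image size rather than with the number of disparity candidates, which is the source of the speedup the abstract claims.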

Paper Details

Authors:
Takeshi Shimada, Masayuki Ikebe, Prasoon Ambalathankandy, Shinya Takamaeda-Yamazaki, Masato Motomura, Tetsuya Asai
Submitted On:
12 April 2018 - 8:07pm

Document Files

Shimada_4399



[1] Takeshi Shimada, Masayuki Ikebe, Prasoon Ambalathankandy, Shinya Takamaeda-Yamazaki, Masato Motomura, Tetsuya Asai, "SPARSE DISPARITY ESTIMATION USING GLOBAL PHASE ONLY CORRELATION FOR STEREO MATCHING ACCELERATION", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2525. Accessed: Jul. 16, 2018.

FRAME-SUBSAMPLED, DRIFT-RESILIENT VIDEO OBJECT TRACKING


Performance-cost trade-offs in video object tracking tasks for long video sequences are investigated. A novel frame-subsampled, drift-resilient (FSDR) video object tracking algorithm is presented that achieves the desired tracking accuracy while dramatically reducing computing time by processing only subsampled video frames. A new pattern-matching score metric is proposed to estimate the probability of drifting, and a drift-recovery procedure is developed to enable the algorithm to recover from a drift situation and resume accurate tracking.
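The paper's pattern-matching score metric is not specified here; as a standard stand-in, zero-mean normalized cross-correlation shows how a per-frame match score can flag likely drift. `match_score` and the threshold interpretation are illustrative assumptions:

```python
import numpy as np

def match_score(template, patch):
    """Zero-mean normalized cross-correlation in [-1, 1]; a persistently
    low score suggests the tracker may have drifted off the target."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum()) + 1e-12
    return float((t * p).sum() / denom)

rng = np.random.default_rng(1)
target = rng.random((16, 16))
on_target = match_score(target, target)                   # near 1: still locked on
off_target = match_score(target, rng.random((16, 16)))    # near 0: likely drift
```

A tracker that subsamples frames can evaluate such a score only on the processed frames and trigger a recovery search whenever it drops below a tuned threshold.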

Paper Details

Authors:
Xuan Wang, Yuhen Hu, Robert G. Radwin, John D. Lee
Submitted On:
12 April 2018 - 4:34pm

Document Files

ICASSP2018_poster_figuresInEps.pptx



[1] Xuan Wang, Yuhen Hu, Robert G. Radwin, John D. Lee, "FRAME-SUBSAMPLED, DRIFT-RESILIENT VIDEO OBJECT TRACKING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2496. Accessed: Jul. 16, 2018.

Learning convolutional sparse coding


We propose a convolutional recurrent sparse auto-encoder model. The model consists of a sparse encoder, which is a convolutional extension of the learned ISTA (LISTA) method, and a linear convolutional decoder. Our strategy offers a simple method for learning a task-driven sparse convolutional dictionary (CD) and producing an approximate convolutional sparse code (CSC) over the learned dictionary. We trained the model to minimize reconstruction loss via gradient descent with back-propagation and achieved competitive results.
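The classical ISTA iteration that LISTA unrolls can be sketched for a single 1-D convolutional filter. This is plain ISTA with hand-set parameters; in LISTA the step size, threshold, and filters would be learned by back-propagation, and `conv_ista` and its circular-convolution setup are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def conv_ista(x, d, theta=0.05, step=0.1, n_iter=50):
    """Unrolled ISTA for one circular 1-D convolutional filter d.
    Solves min_z 0.5*||d*z - x||^2 + theta*||z||_1 approximately."""
    n = len(x)
    D = np.fft.fft(d, n)                 # circular convolution as a frequency product
    z = np.zeros(n)
    for _ in range(n_iter):
        recon = np.real(np.fft.ifft(np.fft.fft(z) * D))                   # D z
        grad = np.real(np.fft.ifft(np.fft.fft(recon - x) * np.conj(D)))   # D^T (D z - x)
        z = soft_threshold(z - step * grad, theta)
    return z

# sparse ground-truth code convolved with the filter gives the observed signal
d = np.array([1.0, 0.5, 0.25])
z_true = np.zeros(128); z_true[[10, 50, 90]] = 5.0
x = np.real(np.fft.ifft(np.fft.fft(z_true) * np.fft.fft(d, 128)))
z = conv_ista(x, d)
residual = np.linalg.norm(np.real(np.fft.ifft(np.fft.fft(z) * np.fft.fft(d, 128))) - x)
```

Unrolling a fixed number of these iterations into network layers and learning the parameters end-to-end is the essence of the (L)ISTA-style encoder the abstract describes.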

Paper Details

Authors:
Hillel Sreter, Raja Giryes
Submitted On:
20 April 2018 - 12:42pm

Document Files

CSC - ICASSP.pptx


CSC - ICASSP.pdf



[1] Hillel Sreter, Raja Giryes, "Learning convolutional sparse coding ", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2478. Accessed: Jul. 16, 2018.

Document Quality Estimation using Spatial Frequency Response


Current Document Image Quality Assessment (DIQA) algorithms directly relate Optical Character Recognition (OCR) accuracy to document quality in order to build supervised learning frameworks. This direct correlation has two major limitations: (a) OCR may be affected by factors independent of the quality of the capture, and (b) it cannot account for blur variations within an image. An alternative is to quantify the quality of capture using human judgment; however, this is subjective and prone to error.

Paper Details

Authors:
Pranjal Kumar Rai, Sajal Maheshwari, Vineet Gandhi
Submitted On:
13 April 2018 - 2:24am

Document Files

rai_ICASSP.pdf



[1] Pranjal Kumar Rai, Sajal Maheshwari, Vineet Gandhi, "Document Quality Estimation using Spatial Frequency Response", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2469. Accessed: Jul. 16, 2018.

PREDICTING TONGUE MOTION IN UNLABELED ULTRASOUND VIDEO USING 3D CONVOLUTIONAL NEURAL NETWORKS


A 3-dimensional convolutional neural network is trained on unlabeled ultrasound video to predict an upcoming tongue image from previous ones. The network obtains results superior to those of simpler predictors, and provides a starting point for exploiting the higher-level representation of the tongue learned by the system in a variety of applications in speech research. This work is believed to be the first application of convolutional neural networks to unlabeled ultrasound video for the purpose of predicting tongue movement.
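The "simpler predictors" the network is compared against can be as basic as per-pixel linear extrapolation from the two most recent frames. The sketch below is such a baseline, entirely illustrative and not from the paper:

```python
import numpy as np

def linear_extrapolate(frames):
    """Predict the next frame per pixel from the last two: 2*f[t] - f[t-1].
    Exact only when intensity varies linearly in time; a trained 3D-CNN
    can capture far richer spatiotemporal structure than this fixed filter."""
    return 2.0 * frames[-1] - frames[-2]

# synthetic sequence whose intensity ramps linearly: prediction is exact
base = np.random.default_rng(2).random((32, 32))
frames = [base + 0.1 * t for t in range(4)]     # f[0] .. f[3]
pred = linear_extrapolate(frames[:3])           # predict f[3] from f[1], f[2]
```

Beating this kind of baseline on real ultrasound video is what demonstrates that the 3D-CNN has learned a nontrivial representation of tongue motion.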

Paper Details

Authors:
Shicheng Chen, Guorui Sheng, Pierre Roussel, Bruce Denby
Submitted On:
12 April 2018 - 1:53pm

Document Files

Poster



[1] Shicheng Chen, Guorui Sheng, Pierre Roussel, Bruce Denby, "PREDICTING TONGUE MOTION IN UNLABELED ULTRASOUND VIDEO USING 3D CONVOLUTIONAL NEURAL NETWORKS", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2457. Accessed: Jul. 16, 2018.
