
Image/Video Processing

Hard Shadows Removal Using An Approximate Illumination Invariant


Detecting and removing hard shadows from foreground masks is a challenging step in change detection. This paper presents a simple and effective method for handling hard shadows, which consist of an interior portion and a boundary portion. A pixel-wise neighborhood ratio is computed to remove most of the interior shadow points. For the boundaries of shadow regions, we exploit color constancy to eliminate the edges of hard shadows and obtain relatively accurate object contours. Morphological processing is then applied to improve the integrity of the detected objects.
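As a rough illustration of the two-stage idea (not the authors' implementation), the sketch below marks interior shadow pixels wherever the frame-to-background intensity ratio is locally uniform and bounded away from 1, then applies morphological opening and closing to restore object integrity. The ratio bounds, window size, and variance tolerance are assumed values.

import cv2
import numpy as np

def remove_interior_shadow(frame_gray, background_gray, fg_mask,
                           lo=0.4, hi=0.95, win=5, var_tol=0.01):
    """Suppress interior hard-shadow pixels inside a foreground mask."""
    eps = 1e-6
    ratio = frame_gray.astype(np.float32) / (background_gray.astype(np.float32) + eps)

    # Local mean and variance of the ratio over a win x win neighborhood.
    mean = cv2.blur(ratio, (win, win))
    mean_sq = cv2.blur(ratio * ratio, (win, win))
    var = mean_sq - mean * mean

    # Shadow candidate: darker than background by a bounded factor,
    # with a locally uniform ratio (uniform darkening), inside the mask.
    shadow = (mean > lo) & (mean < hi) & (var < var_tol) & (fg_mask > 0)

    cleaned = fg_mask.copy()
    cleaned[shadow] = 0

    # Morphological opening/closing to enhance the integrity of the objects.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    return cleaned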

Paper Details

Authors:
Bingshu Wang, C.L. Philip Chen, Yuyuan Li, Yong Zhao
Submitted On:
20 April 2018 - 1:49am
Document Files

BingshuWang_Poster_2018ICASSP.pdf

[1] Bingshu Wang, C.L. Philip Chen, Yuyuan Li, Yong Zhao, "Hard Shadows Removal Using An Approximate Illumination Invariant ", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2409. Accessed: Jul. 16, 2018.

3D IMAGE RECONSTRUCTION FROM MULTI-FOCUS MICROSCOPE: AXIAL SUPER-RESOLUTION AND MULTIPLE-FRAME PROCESSING

Paper Details

Authors:
Pablo Ruiz, Xiang Huang, Kuan He, Nicola J. Ferrier, Mark Hereld, Alan Selewa, Matthew Daddysman, Norbert Scherer, Oliver Cossairt, Aggelos K. Katsaggelos
Submitted On:
12 April 2018 - 4:33pm
Document Files

poster

[1] Pablo Ruiz, Xiang Huang, Kuan He, Nicola J. Ferrier, Mark Hereld, Alan Selewa, Matthew Daddysman, Norbert Scherer, Oliver Cossairt, Aggelos K. Katsaggelos, "3D IMAGE RECONSTRUCTION FROM MULTI-FOCUS MICROSCOPE: AXIAL SUPER-RESOLUTION AND MULTIPLE-FRAME PROCESSING", IEEE SigPort, 2018. [Online]. Available: http://sigport.org/2392. Accessed: Jul. 16, 2018.

Counting Plants Using Deep Learning


In this paper we address the task of counting crop plants in a field using CNNs. The number of plants in an Unmanned Aerial Vehicle (UAV) image of the field is estimated using regression rather than classification, which avoids the need to know (or guess) the maximum expected number of plants. We also describe a method to extract images of sections, or "plots", from an orthorectified image of the entire crop field; these images are used for training and evaluation of the CNN.
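A minimal PyTorch sketch of the regression formulation is given below; the architecture, input size, and hyper-parameters are illustrative assumptions rather than the network used in the paper. The key point is the single unbounded scalar output trained with an MSE loss, so no maximum count has to be fixed in advance.

import torch
import torch.nn as nn

class PlantCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Single linear output: the estimated plant count (no fixed class range).
        self.regressor = nn.Linear(64, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.regressor(x).squeeze(1)

model = PlantCounter()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch of "plot" images.
images = torch.randn(8, 3, 128, 128)          # stand-in for extracted plot images
counts = torch.randint(0, 40, (8,)).float()   # stand-in ground-truth counts
loss = criterion(model(images), counts)
optimizer.zero_grad()
loss.backward()
optimizer.step()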

Paper Details

Authors:
Yuhao Chen, Christopher Boomsma, Edward Delp
Submitted On:
19 November 2017 - 12:46pm
Document Files

counting_plants_using_deep_learning_globalsip2017

[1] Yuhao Chen, Christopher Boomsma, Edward Delp, "Counting Plants Using Deep Learning", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2364. Accessed: Jul. 16, 2018.

Numerical differentiation of noisy, nonsmooth, multidimensional data


We consider the problem of differentiating a multivariable function specified by noisy data. Following previous work for the single-variable case, we regularize the differentiation process by formulating it as an inverse problem with an integration operator as the forward model. Total-variation regularization avoids the noise amplification of finite-difference methods while allowing for discontinuous solutions. Unlike the single-variable case, we use an alternating direction method of multipliers (ADMM) algorithm to provide greater efficiency for large problems.
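The 1-D sketch below shows the splitting: the forward model A integrates the unknown derivative u, total variation penalizes ||Du||_1, and ADMM alternates a quadratic u-update, a soft-thresholding z-update, and a dual update. The multivariable case in the paper uses the same structure with larger operators; the regularization weight and penalty parameter here are assumed values.

import numpy as np

def tv_derivative_1d(f, dx=1.0, lam=0.05, rho=1.0, iters=200):
    """TV-regularized derivative of noisy 1-D data via ADMM (dense, toy-sized)."""
    n = len(f)
    A = np.tril(np.ones((n, n))) * dx            # integration (forward model)
    D = (np.eye(n, k=1) - np.eye(n))[:-1] / dx   # finite-difference operator
    f = f - f[0]                                 # integrate from the first sample

    u = np.zeros(n)
    z = np.zeros(n - 1)
    w = np.zeros(n - 1)
    lhs = A.T @ A + rho * D.T @ D

    for _ in range(iters):
        # u-update: quadratic subproblem (data fit + augmented penalty).
        u = np.linalg.solve(lhs, A.T @ f + rho * D.T @ (z - w))
        # z-update: soft thresholding promotes a sparse (piecewise-constant) Du.
        v = D @ u + w
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # Dual update.
        w = w + D @ u - z
    return u

# Usage: the derivative of noisy |x| should be close to a step from -1 to +1.
x = np.linspace(-1, 1, 201)
noisy = np.abs(x) + 0.01 * np.random.randn(x.size)
du = tv_derivative_1d(noisy, dx=x[1] - x[0])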


Paper Details

Authors:
Rick Chartrand
Submitted On:
15 November 2017 - 7:57am
Document Files

chartrand.pdf

[1] Rick Chartrand, "Numerical differentiation of noisy, nonsmooth, multidimensional data", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2356. Accessed: Jul. 16, 2018.

Efficient Segmentation-Aided Text Detection for Intelligent Robots_Slides


Scene text detection is a critical prerequisite for many fascinating applications of vision-based intelligent robots. Existing methods detect text either by using only local information or by casting detection as a semantic segmentation problem; they tend to produce a large number of false alarms or cannot separate individual words accurately. In this work, we present an elegant segmentation-aided text detection solution that predicts word-level bounding boxes using an end-to-end trainable deep convolutional neural network.
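A toy sketch of pairing a text/non-text segmentation branch with per-location box regression on a shared backbone is shown below; it only illustrates the two-head structure the abstract describes, and the backbone, heads, and box parameterization are assumptions rather than the proposed network.

import torch
import torch.nn as nn

class SegAidedTextDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel text/non-text score (the "aid").
        self.seg_head = nn.Conv2d(64, 1, 1)
        # Box head: per-location offsets (dx1, dy1, dx2, dy2) to a word box.
        self.box_head = nn.Conv2d(64, 4, 1)

    def forward(self, x):
        feat = self.backbone(x)
        return torch.sigmoid(self.seg_head(feat)), self.box_head(feat)

model = SegAidedTextDetector()
seg_map, boxes = model(torch.randn(1, 3, 256, 256))
print(seg_map.shape, boxes.shape)  # (1, 1, 256, 256) and (1, 4, 256, 256)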

Paper Details

Authors:
Junting Zhang, Yuewei Na, Siyang Li, C.-C. Jay Kuo
Submitted On:
12 April 2018 - 5:30pm
Document Files

GlobalSIP17_Oral_Segmentation-aided_Text_Detection

[1] Junting Zhang, Yuewei Na, Siyang Li, C.-C. Jay Kuo, "Efficient Segmentation-Aided Text Detection for Intelligent Robots_Slides ", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2354. Accessed: Jul. 16, 2018.

Image Error Concealment based on Joint Sparse Representation and Non-local Similarity


In this paper, an image error concealment method based on joint local sparse representation and non-local similarity is proposed. The proposed method obtains an optimal sparse representation of an image patch, covering both the missing pixels and the known neighboring pixels, for recovery purposes. First, a pair of dictionaries and a mapping function are simultaneously learned offline from a training data set.
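The sketch below illustrates only the recovery step: sparse-code the known pixels of a corrupted patch against the matching rows of a dictionary, then synthesize the missing pixels from the same sparse code. The paper's joint learning of the dictionary pair and mapping function is not reproduced; a random unit-norm dictionary and the sparsity level are stand-in assumptions.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
patch_size, n_atoms = 64, 128                  # 8x8 patches, overcomplete dictionary
D = rng.standard_normal((patch_size, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms (stand-in for a learned D)

patch = rng.standard_normal(patch_size)        # ground-truth patch (for the demo)
known = rng.random(patch_size) > 0.3           # roughly 70% of pixels survive transmission

# Sparse-code the patch using only the rows of D that correspond to known pixels.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(D[known], patch[known])
code = omp.coef_

# Conceal the errors: fill the missing pixels from the shared sparse code.
recovered = patch.copy()
recovered[~known] = D[~known] @ code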


Paper Details

Authors:
Ali Akbari, Maria Trocan, Bertrand Granado
Submitted On:
13 November 2017 - 9:45pm
Document Files

AliAkbari.pdf

[1] Ali Akbari, Maria Trocan, Bertrand Granado, "Image Error Concealment based on Joint Sparse Representation and Non-local Similarity", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2347. Accessed: Jul. 16, 2018.

GLOBALSIP presentation


Target re-identification across non-overlapping camera views is a challenging task due to variations in target appearance, illumination, viewpoint, and intrinsic camera parameters. The brightness transfer function (BTF) was introduced for inter-camera color calibration and to improve the performance of target re-identification methods. Several works have built on BTFs, most notably using weighted BTFs (WBTF), the cumulative BTF (CBTF), and the mean BTF (MBTF). In this paper, we present a novel method to model the appearance variation across different camera views.
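For context, the sketch below builds a cumulative brightness transfer function (CBTF) between two camera views by matching the cumulative histograms of corresponding observations; it illustrates the BTF family the abstract refers to rather than the new appearance model proposed in the paper. The synthetic observations in the usage example are assumptions.

import numpy as np

def cumulative_btf(pixels_cam_a, pixels_cam_b, levels=256):
    """Map brightness levels of camera A onto camera B via CDF matching."""
    hist_a, _ = np.histogram(pixels_cam_a, bins=levels, range=(0, levels))
    hist_b, _ = np.histogram(pixels_cam_b, bins=levels, range=(0, levels))
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()

    # For each level in A, pick the smallest level in B whose cumulative
    # probability is at least as large.
    btf = np.searchsorted(cdf_b, cdf_a, side="left").clip(0, levels - 1)
    return btf.astype(np.uint8)

# Usage: transfer an observation from camera A into camera B's brightness space.
rng = np.random.default_rng(0)
obs_a = rng.integers(0, 256, size=(60, 30))              # target as seen by camera A
obs_b = (0.7 * obs_a + 20).clip(0, 255).astype(int)      # same target, camera B
btf = cumulative_btf(obs_a.ravel(), obs_b.ravel())
transferred = btf[obs_a]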

Paper Details

Authors:
Koray Ozcan, Senem Velipasalar
Submitted On:
13 November 2017 - 9:54am
Document Files

GLOBALSIP_2017.pdf

[1] Koray Ozcan, Senem Velipasalar, "GLOBALSIP presentation ", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2336. Accessed: Jul. 16, 2018.

MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT


We present a novel No-Reference (NR) video quality assessment (VQA) algorithm that operates on the sparse representation coefficients of local spatio-temporal (video) volumes. Our work is motivated by the observation that the primary visual cortex adopts a sparse coding strategy to represent visual stimuli. We use the popular K-SVD algorithm to construct a spatio-temporal dictionary that sparsely represents local spatio-temporal volumes of natural videos.
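A rough sketch of this representation stage is given below: local spatio-temporal volumes are vectorized and sparse-coded against a learned dictionary. scikit-learn's MiniBatchDictionaryLearning with an OMP transform is used as a stand-in for K-SVD, and the volume size, dictionary size, and sparsity level are assumed values.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
video = rng.random((30, 72, 96))               # (frames, height, width) stand-in clip

# Extract 5x8x8 spatio-temporal volumes on a coarse grid and vectorize them.
t, h, w = 5, 8, 8
volumes = [
    video[i:i + t, j:j + h, k:k + w].ravel()
    for i in range(0, video.shape[0] - t, t)
    for j in range(0, video.shape[1] - h, h)
    for k in range(0, video.shape[2] - w, w)
]
X = np.stack(volumes)                          # (n_volumes, t*h*w)

# Learn the dictionary and compute sparse codes via an OMP transform.
dico = MiniBatchDictionaryLearning(
    n_components=128, transform_algorithm="omp",
    transform_n_nonzero_coefs=8, random_state=0,
)
codes = dico.fit(X).transform(X)               # sparse coefficients per volume
# Quality-aware features could then be built from statistics of `codes`
# (e.g., histograms of the coefficients), as the abstract suggests.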

Paper Details

Authors:
Muhammed Shabeer P., Saurabhchand Bhati, Sumohana S. Channappayya
Submitted On:
12 November 2017 - 9:24am
Document Files

Slides for paper 1590 GlobalSIP 2017

[1] Muhammed Shabeer P., Saurabhchand Bhati, Sumohana S. Channappayya, "MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2315. Accessed: Jul. 16, 2018.

Hand Segmentation for Hand-Object Interaction from Depth Map

Paper Details

Authors:
Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel Tretter, Truong Nguyen
Submitted On:
12 November 2017 - 7:55am
Document Files

GlobalSIP_byeong.pdf

[1] Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel Tretter, Truong Nguyen, "Hand Segmentation for Hand-Object Interaction from Depth Map", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2314. Accessed: Jul. 16, 2018.

MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT


We present a novel No-Reference (NR) video quality assessment (VQA) algorithm that operates on the sparse representation coefficients of local spatio-temporal (video) volumes. Our work is motivated by the observation that the primary visual cortex adopts a sparse coding strategy to represent visual stimuli. We use the popular K-SVD algorithm to construct spatio-temporal dictionaries to sparsely represent local spatio-temporal volumes of natural videos. We empirically demonstrate that the histogram of the sparse representations

Paper Details

Authors:
Saurabhchand Bhati
Submitted On:
12 November 2017 - 7:49am
Document Files

Globalsip_2017_Paper#1590_muhammed_shabeer.pdf

[1] Saurabhchand Bhati, "MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2313. Accessed: Jul. 16, 2018.
