Image/Video Processing

Counting Plants Using Deep Learning


In this paper we address the task of counting crop plants in a field using convolutional neural networks (CNNs). The number of plants in an Unmanned Aerial Vehicle (UAV) image of the field is estimated using regression rather than classification, which avoids the need to know (or guess) the maximum expected number of plants. We also describe a method to extract images of sections, or "plots," from an orthorectified image of the entire crop field; these plot images are used for training and evaluating the CNN.
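
The plot-extraction step can be illustrated with a minimal sketch. This is our own illustration, not the paper's method: the function name and the fixed, axis-aligned grid are assumptions, whereas a real orthomosaic would need georeferenced and possibly rotated plot boundaries.

```python
import numpy as np

def extract_plots(ortho, plot_h, plot_w, rows, cols, origin=(0, 0)):
    """Cut a grid of equally sized plot images out of an orthorectified
    field mosaic. The grid geometry (plot size, row/column counts,
    origin) is assumed known, e.g. from the planting layout."""
    r0, c0 = origin
    plots = []
    for i in range(rows):
        for j in range(cols):
            top, left = r0 + i * plot_h, c0 + j * plot_w
            plots.append(ortho[top:top + plot_h, left:left + plot_w])
    return plots

# Toy 4x6 "field" split into a 2x3 grid of 2x2 plots.
field = np.arange(24).reshape(4, 6)
plots = extract_plots(field, 2, 2, 2, 3)
```

Each extracted plot image would then be paired with its ground-truth plant count to train the regression CNN.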

Paper Details

Authors: Yuhao Chen, Christopher Boomsma, Edward Delp
Submitted On: 19 November 2017
Document Files: counting_plants_using_deep_learning_globalsip2017

[1] Yuhao Chen, Christopher Boomsma, Edward Delp, "Counting Plants Using Deep Learning", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2364. Accessed: Dec. 16, 2017.

Numerical differentiation of noisy, nonsmooth, multidimensional data


We consider the problem of differentiating a multivariable function specified by noisy data. Following previous work on the single-variable case, we regularize the differentiation process by formulating it as an inverse problem with an integration operator as the forward model. Total-variation regularization avoids the noise amplification of finite-difference methods while still allowing discontinuous solutions. Unlike in the single-variable case, we use an alternating direction method of multipliers (ADMM) algorithm to provide greater efficiency for large problems.
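
The noise amplification that motivates the regularized formulation is easy to see numerically. This small sketch (our own illustration, not the paper's algorithm) differentiates samples with 1% noise by forward differences; the error in the derivative is orders of magnitude larger than the error in the data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f_clean = np.sin(2 * np.pi * x)                    # smooth signal
f_noisy = f_clean + 0.01 * rng.standard_normal(n)  # ~1% noise

# Forward finite differences of the clean vs. noisy samples.
d_clean = np.diff(f_clean) / h
d_noisy = np.diff(f_noisy) / h

# Dividing the noise by the small step h blows it up.
data_err = np.abs(f_noisy - f_clean).max()
deriv_err = np.abs(d_noisy - d_clean).max()
```

The regularized approach instead seeks the derivative u minimizing a data-fidelity term through the integration (cumulative-sum) operator plus a total-variation penalty on u, so noise is not amplified and jumps in u survive.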

Paper Details

Authors: Rick Chartrand
Submitted On: 15 November 2017
Document Files: chartrand.pdf

[1] Rick Chartrand, "Numerical differentiation of noisy, nonsmooth, multidimensional data", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2356. Accessed: Dec. 16, 2017.

Efficient Segmentation-Aided Text Detection for Intelligent Robots_Slides


Scene text detection is a critical prerequisite for many applications of vision-based intelligent robots. Existing methods either detect text using only local information or cast detection as a semantic segmentation problem; they tend to produce large numbers of false alarms or fail to separate individual words accurately. In this work, we present an elegant segmentation-aided text detection solution that predicts word-level bounding boxes using an end-to-end trainable deep convolutional neural network.

Paper Details

Authors: Yuewei Na, Siyang Li, C.-C. Jay Kuo
Submitted On: 14 November 2017
Document Files: GlobalSIP17_Oral_Segmentation-aided_Text_Detection

[1] Yuewei Na, Siyang Li, C.-C. Jay Kuo, "Efficient Segmentation-Aided Text Detection for Intelligent Robots_Slides ", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2354. Accessed: Dec. 16, 2017.

Image Error Concealment based on Joint Sparse Representation and Non-local Similarity


In this paper, an image error concealment method based on joint local sparse representation and non-local similarity is proposed. The method obtains an optimal sparse representation of an image patch, containing both missing pixels and known neighboring pixels, for recovery purposes. First, a pair of dictionaries and a mapping function are simultaneously learned offline from a training data set.
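
The paper learns its dictionaries and uses sparse coding with a learned mapping; as a much simpler stand-in for that idea, the sketch below (our own, with a fixed low-frequency DCT dictionary and plain least squares instead of learned dictionaries and sparse coding) fills a patch's missing pixels from a fit to its known pixels:

```python
import numpy as np

def dct_dictionary(n):
    """Columns are the n DCT-II basis vectors of length n, normalized."""
    k, i = np.meshgrid(np.arange(n), np.arange(n))
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return D / np.linalg.norm(D, axis=0)

def conceal(patch, mask, D):
    """Fit the patch's known pixels (mask==True) over dictionary D by
    least squares, then fill the missing pixels from the fit."""
    coef, *_ = np.linalg.lstsq(D[mask], patch[mask], rcond=None)
    filled = patch.copy()
    filled[~mask] = (D @ coef)[~mask]
    return filled

rng = np.random.default_rng(1)
n = 32
D = dct_dictionary(n)[:, :8]        # low-frequency atoms only
patch = D @ rng.standard_normal(8)  # a 1-D "patch" lying in the atom span
mask = np.ones(n, bool)
mask[10:14] = False                 # 4 missing samples
rec = conceal(patch, mask, D)       # recovers the missing run
```

Because the toy patch lies exactly in the span of the atoms, the fit to the known pixels reproduces the missing ones; the paper's contribution is making this work on real patches via learned dictionaries and non-local similarity.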

Paper Details

Authors: Ali Akbari, Maria Trocan, Bertrand Granado
Submitted On: 13 November 2017
Document Files: AliAkbari.pdf

[1] Ali Akbari, Maria Trocan, Bertrand Granado, "Image Error Concealment based on Joint Sparse Representation and Non-local Similarity", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2347. Accessed: Dec. 16, 2017.

GLOBALSIP presentation


Target re-identification across non-overlapping camera views is a challenging task due to variations in target appearance, illumination, viewpoint, and intrinsic camera parameters. The brightness transfer function (BTF) was introduced for inter-camera color calibration and to improve the performance of target re-identification methods. Several works build on BTFs, most notably the weighted BTF (WBTF), cumulative BTF (CBTF), and mean BTF (MBTF). In this paper, we present a novel method to model the appearance variation across different camera views.
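
The classic BTF construction that these methods build on maps brightness levels of one camera to another by matching cumulative histograms. A minimal sketch of that construction (our own illustration; the paper's contribution is a new appearance model on top of such calibration):

```python
import numpy as np

def brightness_transfer_function(img_a, img_b, bins=256):
    """Estimate a BTF mapping brightness levels of camera A to camera B
    by matching the two cameras' cumulative histograms."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, bins))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, bins))
    ca = np.cumsum(ha) / ha.sum()
    cb = np.cumsum(hb) / hb.sum()
    # For each level in A, take the first level in B whose cumulative
    # proportion reaches A's.
    return np.searchsorted(cb, ca, side="left").clip(0, bins - 1)

# Camera B sees a gamma-darkened version of camera A's scene.
rng = np.random.default_rng(2)
view_a = rng.integers(0, 256, size=(64, 64))
view_b = (255 * (view_a / 255.0) ** 2).astype(int)
btf = brightness_transfer_function(view_a, view_b)
mapped = btf[view_a]                # view_a re-rendered in B's brightness
```

With a monotone brightness change like this, the estimated BTF recovers the mapping almost exactly; WBTF, CBTF, and MBTF differ mainly in how BTFs from multiple frame pairs are combined.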

Paper Details

Authors: Koray Ozcan, Senem Velipasalar
Submitted On: 13 November 2017
Document Files: GLOBALSIP_2017.pdf

[1] Koray Ozcan, Senem Velipasalar, "GLOBALSIP presentation ", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2336. Accessed: Dec. 16, 2017.

MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT


We present a novel No-Reference (NR) video quality assessment (VQA) algorithm that operates on the sparse representation coefficients of local spatio-temporal (video) volumes. Our work is motivated by the observation that the primary visual cortex adopts a sparse coding strategy to represent visual stimuli. We use the popular K-SVD algorithm to construct a spatio-temporal dictionary that sparsely represents local spatio-temporal volumes of natural videos.
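
The overall feature-extraction shape of such a pipeline can be sketched as follows. This is a toy stand-in, not the paper's method: it uses a random dictionary and hard-thresholded correlations where the paper uses a K-SVD-learned dictionary and proper sparse coding, and all names and sizes here are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def st_volumes(video, size=4):
    """Split a (frames, H, W) video into non-overlapping size^3
    spatio-temporal volumes, flattened and mean-removed per volume."""
    t, h, w = (d // size for d in video.shape)
    v = video[: t * size, : h * size, : w * size]
    v = v.reshape(t, size, h, size, w, size)
    vols = v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, size ** 3)
    return vols - vols.mean(axis=1, keepdims=True)

def sparse_histogram(vols, D, keep=3, bins=10):
    """Code each volume against dictionary D, keep the `keep` largest
    correlations per volume, and pool all coefficients into a
    normalized histogram feature."""
    corr = vols @ D                                   # (n_vols, n_atoms)
    thresh = -np.sort(-np.abs(corr), axis=1)[:, keep - 1 : keep]
    coef = np.where(np.abs(corr) >= thresh, corr, 0.0)
    hist, _ = np.histogram(coef[coef != 0], bins=bins)
    return hist / hist.sum()

video = rng.standard_normal((16, 32, 32))             # toy "video"
D = rng.standard_normal((64, 100))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
feature = sparse_histogram(st_volumes(video), D)
```

The resulting histogram feature would then be regressed against subjective quality scores; the paper's insight is that these coefficient statistics shift in characteristic ways when a video is distorted.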

Paper Details

Authors:
Muhammed Shabeer P,. Saurabhchand Bhati, Sumohana S. Channappayya
Submitted On:
12 November 2017 - 9:24am
Short Link:
Type:
Event:
Paper Code:
Document Year:
Cite

Document Files

Slides for paper 1590 GlobalSIP 2017

(19 downloads)

Keywords

Subscribe

[1] Muhammed Shabeer P., Saurabhchand Bhati, Sumohana S. Channappayya, "MODELING SPARSE SPATIO-TEMPORAL REPRESENTATIONS FOR NO-REFERENCE VIDEO QUALITY ASSESSMENT", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2315. Accessed: Dec. 16, 2017.

Hand Segmentation for Hand-Object Interaction from Depth Map

Paper Details

Authors: Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel Tretter, Truong Nguyen
Submitted On: 12 November 2017
Document Files: GlobalSIP_byeong.pdf

[1] Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel Tretter, Truong Nguyen, "Hand Segmentation for Hand-Object Interaction from Depth Map", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2314. Accessed: Dec. 16, 2017.


A Linear Regression Framework For Assessing Time-Varying Subjective Quality in HTTP Streaming


In an HTTP streaming framework, continuous-time quality evaluation is necessary to monitor the time-varying subjective quality (TVSQ) of videos under rate adaptation. In this paper, we present a novel learning framework for TVSQ assessment using linear regression under both the Reduced-Reference (RR) and No-Reference (NR) settings. The proposed framework relies on objective short-time quality estimates and past TVSQs to predict the present TVSQ.
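
The regression structure described above can be sketched with synthetic data. This is our own illustration under stated assumptions: the "short-time quality" and TVSQ series here are fabricated stand-ins (the real ones come from objective metrics and subjective studies), and we regress the present TVSQ on one past TVSQ plus the present short-time estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: q[t] is an objective short-time quality estimate,
# s[t] the subjective TVSQ, which reacts sluggishly to changes in q.
T = 500
q = np.clip(np.cumsum(rng.standard_normal(T)) * 0.1 + 3.0, 1.0, 5.0)
s = np.empty(T)
s[0] = q[0]
for t in range(1, T):
    s[t] = 0.9 * s[t - 1] + 0.1 * q[t]

# Regress the present TVSQ on the past TVSQ and the present short-time
# estimate: s[t] ~ w0 + w1*s[t-1] + w2*q[t].
X = np.column_stack([np.ones(T - 1), s[:-1], q[1:]])
w, *_ = np.linalg.lstsq(X, s[1:], rcond=None)
pred = X @ w
```

Because the synthetic TVSQ is by construction linear in its regressors, the fit is exact here; on real subjective data the same structure yields an approximate continuous-time quality monitor.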

Paper Details

Authors: Dendi Sathya Veera Reddy, Soumen Chakraborty, Hemanth P. Sethuram, Kiran Kuchi, Abhinav Kumar
Submitted On: 12 November 2017
Document Files: GlobalSIP2017_slides.pdf

[1] Dendi Sathya Veera Reddy, Soumen Chakraborty, Hemanth P. Sethuram, Kiran Kuchi, Abhinav Kumar, "A Linear Regression Framework For Assessing Time-Varying Subjective Quality in HTTP Streaming", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/2311. Accessed: Dec. 16, 2017.
