
Image/Video Processing

FEATURE++: CROSS DIMENSION FEATURE FUSION FOR ROAD DETECTION


Road detection is a key component of Advanced Driving Assistance Systems, providing valid space and candidate object regions for vehicles. Mainstream road detection methods have focused on extracting discriminative features. In this paper, we propose a robust feature fusion framework, called “Feature++”, which combines superpixel features with 3D features extracted from stereo images. A neural network classifier is then trained to decide whether a superpixel is a road region or not. Finally, the classification results are further refined by a conditional random field.
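The fusion-then-classify pipeline described above can be pictured as feature concatenation followed by a per-superpixel decision. The function names and the toy linear scorer below are illustrative assumptions, not the authors' implementation:

```python
def fuse_features(appearance_feat, geometry_feat):
    # Cross-dimension fusion: concatenate a superpixel's 2-D appearance
    # descriptor with the 3-D descriptor derived from stereo disparity.
    return list(appearance_feat) + list(geometry_feat)

def is_road(fused, weights, bias=0.0, threshold=0.0):
    # Stand-in for the trained neural-network classifier: a linear score
    # thresholded into a binary road / non-road decision.
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return score > threshold
```

In the paper the per-superpixel labels are then smoothed by a conditional random field; that refinement step is omitted from this sketch.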

Paper Details

Authors:
Guorong Cai, Zhun Zhong, Songzhi Su
Submitted On:
8 May 2017 - 5:17am

Document Files

poster_hwl.pdf

[1] Guorong Cai, Zhun Zhong, Songzhi Su, "FEATURE++: CROSS DIMENSION FEATURE FUSION FOR ROAD DETECTION", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1794. Accessed: Jun. 28, 2017.


Patch-based Multiple View Image Denoising with Occlusion Handling

Paper Details

Authors:
Shiwei Zhou, Yu Hen Hu, Hongrui Jiang
Submitted On:
23 March 2017 - 2:49pm

Document Files

ICASSP_poster.pdf

[1] Shiwei Zhou, Yu Hen Hu, Hongrui Jiang, "Patch-based Multiple View Image Denoising with Occlusion Handling", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1782. Accessed: Jun. 28, 2017.

SURROUNDING ADAPTIVE TONE MAPPING IN DISPLAYED IMAGES UNDER AMBIENT LIGHT


In this paper, we propose surrounding adaptive tone mapping for displayed images under ambient light. Under strong ambient light, images displayed on a screen are perceived as dark by the human eye, especially in dark regions. We address this ambient light problem on mobile devices through brightness enhancement and adaptive tone mapping. First, we perform brightness compensation in dark regions using the Bartleson-Breneman equation, which represents the lightness effect on an image under different surrounding illuminations.
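As a rough illustration of the surround-dependent lightness effect the abstract invokes, the toy function below lifts dark pixel values more strongly as the surround gets brighter. The exponent model and its parameters are assumptions for illustration, not the Bartleson-Breneman equation itself:

```python
def compensate_dark_regions(pixel, surround, gamma_dark=0.8, gamma_bright=1.0):
    # pixel and surround are normalized luminances in [0, 1].
    # A brighter surround lowers the exponent, which lifts dark values:
    # a toy stand-in for surround-adaptive brightness compensation.
    g = gamma_bright - (gamma_bright - gamma_dark) * surround
    return pixel ** g
```

With a dark pixel of 0.2, a bright surround yields a boosted value while a dark surround leaves it unchanged, matching the intuition that compensation is only needed when ambient light washes out dark regions.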

Paper Details

Authors:
Lu Wang, Cheolkon Jung
Submitted On:
14 March 2017 - 11:37pm

Document Files

ICASSP2017_Surrounding.pdf

[1] Lu Wang, Cheolkon Jung, "SURROUNDING ADAPTIVE TONE MAPPING IN DISPLAYED IMAGES UNDER AMBIENT LIGHT", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1767. Accessed: Jun. 28, 2017.

Facial Attractiveness Prediction Using Psychologically Inspired Convolutional Neural Network (PI-CNN)


This paper proposes a psychologically inspired convolutional neural network (PI-CNN) for automatic facial beauty prediction. Unlike previous methods, the PI-CNN is a hierarchical model that facilitates both facial beauty representation learning and predictor training. Inspired by recent psychological studies, significant appearance features of facial detail, lighting, and color are used to optimize the PI-CNN facial beauty predictor through a new cascaded fine-tuning method.
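The cascaded fine-tuning scheme can be pictured as applying one tuning stage per psychologically motivated feature group, each stage refining the predictor produced by the previous one. The stage names and the callback interface below are assumptions for illustration:

```python
def cascaded_fine_tune(predictor, feature_groups, tune_stage):
    # Apply fine-tuning stages in sequence, each specialized to one
    # appearance cue (e.g. facial detail, lighting, color), so later
    # stages start from the predictor produced by earlier ones.
    for cue, data in feature_groups:
        predictor = tune_stage(predictor, cue, data)
    return predictor
```

For example, with a logging stub in place of a real fine-tuning routine, the stages run in the listed order, mirroring the cascade the abstract describes.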

Paper Details

Authors:
Jie Xu, Lianwen Jin, Lingyu Liang, Ziyong Feng, Duorui Xie, Huiyun Mao
Submitted On:
12 March 2017 - 12:12pm

Document Files

ICASSP2017_Poster.pdf

[1] Jie Xu, Lianwen Jin, Lingyu Liang, Ziyong Feng, Duorui Xie, Huiyun Mao, "Facial Attractiveness Prediction Using Psychologically Inspired Convolutional Neural Network (PI-CNN)", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1749. Accessed: Jun. 28, 2017.

Spatio-Temporal Binary Video Inpainting via Threshold Dynamics


We propose a new variational method for the completion of moving shapes through binary video inpainting that works by smoothly recovering the objects into an inpainting hole. We solve it by a simple dynamic shape analysis algorithm based on threshold dynamics. The model takes into account the optical flow and motion occlusions. The resulting inpainting algorithm diffuses the available information along the space and the visible trajectories of the pixels in time. We show its performance with examples from the Sintel dataset, which contains complex object motion and occlusions.
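A single diffuse-then-threshold step of the kind used in threshold-dynamics (MBO-style) schemes can be sketched on a small binary grid. Restricting updates to the hole and ignoring optical flow and occlusions are deliberate simplifications of the paper's spatio-temporal model:

```python
def threshold_dynamics_step(mask, hole):
    # One diffuse-then-threshold iteration: average the 4-neighbours
    # (diffusion), then re-binarize at 0.5 (thresholding). Only pixels
    # inside the inpainting hole are updated; known pixels stay fixed.
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if (i, j) in hole:
                nb = [mask[i + di][j + dj]
                      for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= i + di < h and 0 <= j + dj < w]
                out[i][j] = 1 if sum(nb) / len(nb) >= 0.5 else 0
    return out
```

Iterating this step smoothly propagates the shape boundary into the hole, which is the intuition behind completing moving shapes frame by frame.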

Paper Details

Authors:
M. Oliver, R. P. Palomares, C. Ballester, G. Haro
Submitted On:
10 March 2017 - 5:38am

Document Files

posterICASSP.pdf

[1] M. Oliver, R.P Palomares, C. Ballester, G. Haro, "Spatio-Temporal Binary Video Inpainting via Threshold Dynamics", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1734. Accessed: Jun. 28, 2017.

Fast Hyperspectral Unmixing in Presence of Sparse Multiple Scattering Nonlinearities

Paper Details

Authors:
Abderrahim Halimi, Jose Bioucas-Dias, Nicolas Dobigeon, Gerald S. Buller, Stephen McLaughlin
Submitted On:
9 March 2017 - 8:38pm

Document Files

ICASSP1701

[1] Abderrahim Halimi, Jose Bioucas-Dias, Nicolas Dobigeon, Gerald S. Buller, Stephen McLaughlin, "Fast Hyperspectral Unmixing in Presence of Sparse Multiple Scattering Nonlinearities", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1733. Accessed: Jun. 28, 2017.

STOCHASTIC TRUNCATED WIRTINGER FLOW ALGORITHM FOR PHASE RETRIEVAL USING BOOLEAN CODED APERTURES

Paper Details

Authors:
Samuel Pinilla, Camilo Noriega, Henry Arguello
Submitted On:
9 March 2017 - 2:17pm

Document Files

main.pdf

[1] Samuel Pinilla, Camilo Noriega, Henry Arguello, "STOCHASTIC TRUNCATED WIRTINGER FLOW ALGORITHM FOR PHASE RETRIEVAL USING BOOLEAN CODED APERTURES", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1726. Accessed: Jun. 28, 2017.

Quality Assessment of Mobile Videos with In-Capture Distortions

Paper Details

Authors:
Deepti Ghadiyaram, Janice Pan, Alan Bovik, Anush Moorthy, Prasanjit Panda, and Kai-Chieh Yang
Submitted On:
9 March 2017 - 2:07pm

Document Files

incapture copy.pptx

[1] Deepti Ghadiyaram, Janice Pan, Alan Bovik, Anush Moorthy, Prasanjit Panda, and Kai-Chieh Yang, "Quality Assessment of Mobile Videos with In-Capture Distortions", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1725. Accessed: Jun. 28, 2017.

Retinex-Based Perceptual Contrast Enhancement in Images Using Luminance Adaptation


In this paper, we propose retinex-based perceptual contrast enhancement in images using luminance adaptation. We use retinex theory to decompose an image into illumination and reflectance layers, and adopt luminance adaptation to handle the illumination layer, which causes detail loss. First, we obtain the illumination layer using adaptive Gaussian filtering to remove halo artifacts. Then, we adaptively remove the illumination component of the illumination layer in the multi-scale retinex (MSR) process, based on luminance adaptation, to preserve details.
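The multi-scale retinex decomposition mentioned above averages, over several blur scales, the log-ratio between the image and its smoothed version. The 1-D box filter below stands in for the paper's adaptive Gaussian and is an assumption for illustration:

```python
import math

def box_blur(signal, radius):
    # Crude smoothing stand-in for the adaptive Gaussian illumination estimate.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multi_scale_retinex(signal, radii=(1, 2, 4), eps=1e-6):
    # Reflectance estimate: mean over scales of log(I) - log(blur_k(I)).
    acc = [0.0] * len(signal)
    for r in radii:
        blurred = box_blur(signal, r)
        for i, (s, b) in enumerate(zip(signal, blurred)):
            acc[i] += math.log(s + eps) - math.log(b + eps)
    return [a / len(radii) for a in acc]
```

A flat signal yields a near-zero reflectance estimate at every scale, which is why MSR responds to local contrast (edges and texture) rather than absolute brightness.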

Paper Details

Submitted On:
15 March 2017 - 2:30am

Document Files

ICASSP2017_Retinex_final(2).pdf

[1] "Retinex-Based Perceptual Contrast Enhancement in Images Using Luminance Adaptation", IEEE SigPort, 2017. [Online]. Available: http://sigport.org/1722. Accessed: Jun. 28, 2017.
