Image/Video Processing

Markerless Closed-Loop Projection Plane Tracking for Mobile Projector-Camera Systems


The recent trend towards miniaturization of mobile projectors enables new forms of information presentation and interaction. Projectors can be moved freely in space, either by humans or by mobile robots. This paper presents a technique to dynamically track the orientation and position of the projection plane solely by analyzing the distortion of the projection itself, independently of the presented content. It allows distortion-free projection with a fixed metric size for moving projector-camera systems.
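
If the image handed to the projector and the camera's view of the projected result are related by a homography, the pose of the projection plane can be recovered from that homography alone. The following is a minimal, hypothetical Python/OpenCV sketch of one closed-loop correction step, not the paper's method; all names (correction_step, content, camera_frame) are illustrative.

```python
import cv2
import numpy as np

def correction_step(content, camera_frame, K):
    """One hypothetical closed-loop iteration: content is the image sent to
    the projector, camera_frame the captured view, K the camera intrinsics."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(content, None)
    kp2, des2 = orb.detectAndCompute(camera_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # The homography encodes the pose of the projection plane: it can be
    # decomposed into candidate rotations, translations and plane normals.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    # Pre-warp the content with the inverse homography so that, after
    # projection and re-capture, the image appears undistorted.
    h, w = content.shape[:2]
    prewarped = cv2.warpPerspective(content, np.linalg.inv(H), (w, h))
    return prewarped, rotations, translations, normals
```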

Paper Details

Authors:
Niklas Gard, Peter Eisert
Submitted On:
5 October 2018 - 3:47am

Document Files

2018_ICIP_Poster.pdf

Cite:
Niklas Gard, Peter Eisert, "Markerless Closed-Loop Projection Plane Tracking for Mobile Projector-Camera Systems," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3513.

A deep neural network for oil spill semantic segmentation in SAR images


Oil spills pose a major threat to oceanic and coastal environments; hence, automatic detection and continuous monitoring systems constitute an appealing option for minimizing the response time of relevant operations. Numerous efforts have been directed towards such solutions by exploiting a variety of sensing systems, such as satellite Synthetic Aperture Radar (SAR), which can identify oil spills over sea surfaces under any environmental conditions and at any operational time. Such approaches include the use of artificial neural networks, which effectively identify the polluted areas.
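
As a rough illustration of the kind of model such approaches employ (not this paper's architecture), the sketch below is a tiny PyTorch encoder-decoder that maps single-channel SAR patches to per-pixel oil/no-oil logits; every layer size is invented for the example.

```python
# Minimal sketch, with invented layer sizes, of a binary semantic
# segmentation network for single-channel SAR patches (PyTorch).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one oil / no-oil logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

logits = TinySegNet()(torch.randn(1, 1, 64, 64))  # -> shape (1, 1, 64, 64)
```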

Paper Details

Authors:
Georgios Orfanidis, Konstantinos Ioannidis, Konstantinos Avgerinakis, Stefanos Vrochidis, Ioannis Kompatsiaris
Submitted On:
5 October 2018 - 3:37am

Document Files

Poster_ICIP_2018_DCNN_Oil spill.pdf

Cite:
Georgios Orfanidis, Konstantinos Ioannidis, Konstantinos Avgerinakis, Stefanos Vrochidis, Ioannis Kompatsiaris, "A deep neural network for oil spill semantic segmentation in SAR images," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3512.

Learning Illuminant Estimation from Object Recognition


In this paper, we present a deep learning method to estimate the illuminant of an image. Our model is not trained with illuminant annotations, but with the objective of improving performance on an auxiliary task such as object recognition. To the best of our knowledge, this is the first example of a deep learning architecture for illuminant estimation that is trained without ground-truth illuminants. We evaluate our solution on standard datasets for color constancy and compare it with state-of-the-art methods.
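
The training idea can be made concrete with a small sketch. Assuming a simple von Kries-style correction (dividing each channel by the predicted illuminant) and a toy classifier, the snippet below shows how a classification loss alone can backpropagate into an illuminant branch; the architecture is illustrative, not the paper's.

```python
# Sketch under stated assumptions: an illuminant branch predicts an RGB
# gain, the image is corrected (von Kries style), and only the
# classification loss supervises both networks.
import torch
import torch.nn as nn

class IlluminantBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))

    def forward(self, x):
        # predict a positive per-channel illuminant, normalized to unit norm
        ill = torch.nn.functional.softplus(self.net(x)) + 1e-6
        return ill / ill.norm(dim=1, keepdim=True)

illum = IlluminantBranch()
classifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(16, 10))

x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
corrected = x / illum(x)[:, :, None, None]      # divide out the illuminant
loss = nn.CrossEntropyLoss()(classifier(corrected), y)
loss.backward()  # gradients reach the illuminant branch, no illuminant labels
```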

Paper Details

Authors:
Marco Buzzelli, Joost van de Weijer, Raimondo Schettini
Submitted On:
5 October 2018 - 3:02am

Document Files

Learning Illuminant Estimation from Object Recognition

Cite:
Marco Buzzelli, Joost van de Weijer, Raimondo Schettini, "Learning Illuminant Estimation from Object Recognition," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3507.

Infrared Image Colorization Using a S-shape Network


This paper proposes a novel approach for colorizing near-infrared (NIR) images using an S-shape network (SNet). The proposed approach is based on an encoder-decoder architecture followed by a secondary assistant network. The encoder-decoder consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The assistant network is a shallow …
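
Since the abstract is cut off above, the following PyTorch sketch necessarily fills in details with assumptions: a U-Net-like encoder-decoder maps the 1-channel NIR input to RGB, and the shallow assistant network is assumed to predict a residual refinement. Layer sizes and the residual role of the assistant are illustrative, not taken from the paper.

```python
# Rough sketch of a two-part colorization pipeline: encoder-decoder for
# the coarse RGB output, shallow assistant net for an assumed residual.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 3, 1))

    def forward(self, x):
        return self.up(self.down(x))

assistant = nn.Sequential(  # shallow refinement network (assumption)
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1))

nir = torch.rand(1, 1, 64, 64)
coarse = EncoderDecoder()(nir)
rgb = coarse + assistant(coarse)  # assistant adds a residual correction
```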

Paper Details

Authors:
Ziyue Dong, Sei-ichiro Kamata, Toby P. Breckon
Submitted On:
5 October 2018 - 2:52am

Document Files

Infrared Image Colorization Using a S-shape Network

Cite:
Ziyue Dong, Sei-ichiro Kamata, Toby P. Breckon, "Infrared Image Colorization Using a S-shape Network," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3505.

DEPTH ESTIMATION NETWORK FOR DUAL DEFOCUSED IMAGES WITH DIFFERENT DEPTH-OF-FIELD


In this work, we propose an algorithm to estimate the depth map of a scene using defocused images. In particular, the depth map is estimated using two defocused images with different depth-of-field for the same scene. Similar to general depth-from-defocus (DFD) approaches, the proposed algorithm obtains the depth information from the …
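
A toy version of the setup described: the two captures with different depth-of-field are stacked as input channels and a small CNN regresses a per-pixel depth map. The network below is illustrative only (the paper's architecture is not given in this truncated abstract), and grayscale inputs are assumed.

```python
# Minimal sketch of the dual-defocus input arrangement: two
# depth-of-field variants stacked as channels, depth regressed per pixel.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: near/far DoF
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                        # per-pixel depth estimate
)

near_dof = torch.rand(1, 1, 64, 64)  # shallow depth-of-field capture
far_dof = torch.rand(1, 1, 64, 64)   # deep depth-of-field capture
depth = net(torch.cat([near_dof, far_dof], dim=1))  # -> (1, 1, 64, 64)
```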

Paper Details

Authors:
Kyoung Mu Lee
Submitted On:
5 October 2018 - 2:44am

Document Files

ICIP2018_Poster.pdf

Cite:
Kyoung Mu Lee, "DEPTH ESTIMATION NETWORK FOR DUAL DEFOCUSED IMAGES WITH DIFFERENT DEPTH-OF-FIELD," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3504.

Class-specific Coders for Hyper-spectral Image Classification


In this paper, we introduce the paradigm of class-specific coders (CSC) for classification of hyperspectral images (HSI). Specifically, CSC are defined as a set of distinct encoder-decoder (henceforth called a coder) networks, where a given coder is trained on the samples of a particular class. In contrast to auto-encoders (AE), which learn an identity mapping of data in an unsupervised fashion, the CSC model learns re-constructive mappings for all possible pairs of training samples for each class in separate …
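
The class-specific-reconstruction recipe behind CSC can be sketched compactly: one small coder per class, each trained only on that class, with test samples assigned to the class whose coder reconstructs them best. This is the common form of the idea, not necessarily the paper's exact training scheme (the abstract is truncated), and all dimensions are invented.

```python
# Toy sketch: one coder per class; classify by minimum reconstruction error.
import torch
import torch.nn as nn

def make_coder(dim, hidden=32):
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim))

num_classes, dim = 3, 200  # e.g. 200 spectral bands per pixel
coders = [make_coder(dim) for _ in range(num_classes)]

def classify(x):
    # pick the class whose coder reconstructs the sample with least error
    errors = [((c(x) - x) ** 2).mean(dim=1) for c in coders]
    return torch.stack(errors, dim=1).argmin(dim=1)

print(classify(torch.rand(5, dim)))  # coders are untrained, so labels are arbitrary
```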

Paper Details

Authors:
Sanatan Sharma, Akashdeep Goel, Omkar Gune, Biplab Banerjee, Subhasis Chaudhuri
Submitted On:
5 October 2018 - 2:35am

Document Files

PPT for the paper

Cite:
Sanatan Sharma, Akashdeep Goel, Omkar Gune, Biplab Banerjee, Subhasis Chaudhuri, "Class-specific Coders for Hyper-spectral Image Classification," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3503.

IMAGE STITCHING FOR DUAL FISHEYE CAMERAS


Panoramic photography creates stunning immersive visual experiences for viewers. In this paper, we investigate how to seamlessly stitch a pair of images captured by two uncalibrated, back-to-back, 195-degree fisheye cameras to generate a surround view of a 3D scene. It is a challenging task because the two camera centers are displaced and because the common region is the most distorted area. To enhance the robustness of feature matching and hence the quality of stitching, we propose a novel technique that projects the image rectilinearly onto an equirectangular plane.
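
The projection step can be sketched under an assumed equidistant lens model (the paper's calibration may differ): each pixel of the equirectangular output is mapped to a unit ray, the ray to a fisheye image point, and cv2.remap resamples the input.

```python
# Sketch, assuming an equidistant fisheye model, of mapping a 195-degree
# fisheye image onto an equirectangular plane so the overlap regions of
# the two cameras become comparable for feature matching.
import numpy as np
import cv2

def fisheye_to_equirect(img, fov_deg=195.0, out_w=1024, out_h=512):
    h, w = img.shape[:2]
    # longitude/latitude of every output pixel
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # unit ray per pixel, camera looking along +z
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))          # angle off optical axis
    r = theta / np.radians(fov_deg / 2) * (w / 2)     # equidistant projection
    phi = np.arctan2(y, x)
    map_x = (w / 2 + r * np.cos(phi)).astype(np.float32)
    map_y = (h / 2 + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```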

Paper Details

Authors:
I-Chan Lo, Kuang-Tsu Shih, Homer Chen
Submitted On:
5 October 2018 - 2:39am

Document Files

Poster

Cite:
I-Chan Lo, Kuang-Tsu Shih, Homer Chen, "IMAGE STITCHING FOR DUAL FISHEYE CAMERAS," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3502.

S3D: Stacking Segmental P3D for Action Quality Assessment


Action quality assessment is crucial in areas such as sports, surgery, and assembly lines, where action skills can be evaluated. In this paper, we propose S3D, a segment-based P3D-fused network built upon ED-TCN, and push performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training performs better than full-video training, which turns out to focus on the water spray. We show that temporal segmentation can be embedded with little effort.
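
A toy rendering of segment-based scoring: the clip is split into temporal segments, each segment is encoded by a small 3D-conv backbone standing in for P3D, and one quality score is regressed from the aggregated features. Segment count, layer sizes, and mean aggregation are all assumptions for the sketch.

```python
# Sketch of segment-based quality scoring with a stand-in 3D backbone.
import torch
import torch.nn as nn

backbone = nn.Sequential(              # stand-in for P3D
    nn.Conv3d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten())
scorer = nn.Linear(8, 1)               # regress one quality score

video = torch.rand(1, 3, 32, 64, 64)            # (batch, C, T, H, W)
segments = torch.chunk(video, 4, dim=2)         # 4 temporal segments
feats = torch.stack([backbone(s) for s in segments])  # (4, 1, 8)
score = scorer(feats.mean(dim=0))               # aggregate, then score
```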

Paper Details

Authors:
Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran
Submitted On:
5 October 2018 - 2:08am

Document Files

AI Referee: Score Olympic Games

Cite:
Ye Tian, Austin Reiter, Gregory D. Hager, Trac D. Tran, "S3D: Stacking Segmental P3D for Action Quality Assessment," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3501.

DENSE BYNET: RESIDUAL DENSE NETWORK FOR IMAGE SUPER RESOLUTION


This paper proposes Dense ByNet, a method for single-image super-resolution based on a convolutional neural network (CNN). The main innovation is a new architecture that combines several CNN design choices. First, using a residual network as a basis, it introduces dense connections inside residual blocks, significantly reducing the number of parameters. Second, we apply dilated convolutions to increase the spatial context. Lastly, we propose modifications to the activation and cost functions.
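
Those three choices combine naturally in a single block. Below is a hypothetical PyTorch rendering of such a block, dense connections inside a residual block with dilated convolutions; channel counts, the growth rate, and the PReLU activation are assumptions, not the paper's configuration.

```python
# Sketch of a dense-in-residual block with dilated convolutions.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, ch=16, growth=8):
        super().__init__()
        self.c1 = nn.Conv2d(ch, growth, 3, padding=2, dilation=2)
        self.c2 = nn.Conv2d(ch + growth, growth, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(ch + 2 * growth, ch, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        d1 = self.act(self.c1(x))
        d2 = self.act(self.c2(torch.cat([x, d1], dim=1)))
        # dense concatenation fused back to ch channels, plus the residual
        return x + self.fuse(torch.cat([x, d1, d2], dim=1))

y = DenseResidualBlock()(torch.rand(1, 16, 32, 32))  # -> (1, 16, 32, 32)
```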

Paper Details

Authors:
Jiu Xu, Yeongnam Chae, Bjorn Stenger, Ankur Datta
Submitted On:
5 October 2018 - 1:41am

Document Files

NEW_R_Dense ByNet.pdf

Cite:
Jiu Xu, Yeongnam Chae, Bjorn Stenger, Ankur Datta, "DENSE BYNET: RESIDUAL DENSE NETWORK FOR IMAGE SUPER RESOLUTION," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3495.

SuperCut: Superpixel Based Foreground Extraction With Loose Bounding Boxes in One Cutting


Interactive image segmentation that uses a bounding box containing the foreground has gained great popularity because of its convenience. However, its performance often degrades when the bounding box is not tight enough or covers large background regions. To solve this problem, this paper proposes a novel segmentation algorithm called "SuperCut". This algorithm provides robust segmentation in one cut even with loose bounding boxes.
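
In the same spirit, though not the authors' exact algorithm, a superpixel-seeded GrabCut illustrates why superpixels help with loose boxes: superpixels entirely outside the box become hard background, those mostly inside become probable foreground, and GrabCut runs once from that mask. SLIC (scikit-image) and cv2.grabCut stand in for the paper's components; the 0.9 threshold and segment count are invented.

```python
# Sketch: superpixel-level seeding of GrabCut from a loose bounding box.
import numpy as np
import cv2
from skimage.segmentation import slic

def supercut_like(img_bgr, box):
    x, y, w, h = box
    rgb = np.ascontiguousarray(img_bgr[:, :, ::-1])  # SLIC expects RGB
    labels = slic(rgb, n_segments=300)
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
    inside = np.zeros(img_bgr.shape[:2], bool)
    inside[y:y + h, x:x + w] = True
    for sp in np.unique(labels):
        region = labels == sp
        frac = inside[region].mean()  # fraction of superpixel in the box
        if frac == 0:
            mask[region] = cv2.GC_BGD     # entirely outside: hard background
        elif frac > 0.9:
            mask[region] = cv2.GC_PR_FGD  # mostly inside: probable foreground
    bgd, fgd = np.zeros((1, 65)), np.zeros((1, 65))
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD])
```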

Paper Details

Authors:
Shuqiong Wu, Megumi Nakao, and Tetsuya Matsuda
Submitted On:
5 October 2018 - 1:32am

Document Files

Poster.pdf

Cite:
Shuqiong Wu, Megumi Nakao, and Tetsuya Matsuda, "SuperCut: Superpixel Based Foreground Extraction With Loose Bounding Boxes in One Cutting," IEEE SigPort, 2018. [Online]. Available: http://sigport.org/3494.
